Artificial Intelligence

[Ed. Note: Please welcome guest blogger, Colin Lachance, CEO of Compass/vLex Canada. – GL]

At no fewer than four conferences this lovely month of May, I will be speaking about artificial intelligence in law. Each event has a different focus (regulation, impact, libraries, family law), but as my comments in each will spring from my personal framework for considering issues, opportunities and implications, I thought it might help me to advance that framework for your feedback.


I recently started a new job in a different building in downtown Toronto, leaving law firm life behind after almost 15 years. Afraid I would miss it, I thought binge-watching Suits would help remind me of my previous life. As it turns out, the show is also filmed in my new office tower, so there was too much serendipity in it all to ignore. I embraced Suits, and now Mike Ross and Harvey Specter occupy time in my brain alongside my daily work conversations about legal applications for artificial intelligence: how machine learning and natural language processing can be applied to legal due diligence, competitive intelligence, deal structure precedents and contract automation.

I also spend time looking at research tools and platforms, both those produced by Thomson Reuters, for whom I work, and others in the industry. I try to define use cases, articulate what I think to be the value in the market, and understand how all of these products are shaping the future of both the practice and business of law. And more often than I should, I wonder whether Mike Ross and/or Pearson Hardman use Practical Law checklists, Neota Logic solutions or Handshake software, for example. It would make sense to me that a non-lawyer pretending to be a lawyer would use these or similar tools, yet we never actually see Mike doing anything other than research in very traditional ways.

More to the point, it occurs to me that regardless of where you sit on the "robots are coming for the legal profession" continuum, whether you think it is happening tomorrow or never, you can't ignore that in the face of wanting (or being forced) to increase practice efficiency, the industry has created tools so sophisticated that someone without a law degree might be able to practice law. Would not having a law degree count as a form of "artificial intelligence" in the practice of law? Assuming, of course, you were able to get away with it in real life for as long as Mike Ross seems to.
We know that in most jurisdictions there is still a ban on non-lawyer ownership of firms, and we know that para-professionals are doing more and more for clients who require legal services. Alternative providers are disrupting the industry, and incremental change is happening every day. I firmly believe there won't be a big bang but a slow and steady change to the way law is practiced and how the services are bought and sold. If I have learned anything reading this blog, it is that the legal industry is experiencing its industrial revolution moment in every possible way.

I haven’t watched all six or seven seasons of Suits yet, so please don’t spoil it for me, but the question of who could practice and provide legal advice always seemed sacred to me: a (qualified) lawyer had to oversee the robots, go to court, and be sworn in as a judge. But maybe that too will change? Could fiction become reality?

One of the highlights of the American Association of Law Libraries (AALL) conference in Austin this year was the Innovation Tournament which pitted three librarians’ tech innovations against each other. With two prizes, each worth $2,500, up for grabs, the competition was pretty tough. There was a scanning project management innovation, a Virtual Reality presentation preparedness tool, and an innovative ChatBot for legal information assistance. The ChatBot really caught my attention as something that I would love to test out on a local level.

We’ve all probably seen ChatBots, either on Facebook or when we ask for customer service from our favorite online retailers. The idea is pretty simple: it automates many of your frequently asked questions (FAQs) into an interactive chat session. From requesting a maintenance visit to fix your dishwasher to ordering a pizza, ChatBots are out there to handle repetitive tasks.

Imagine the FAQs for law libraries that a ChatBot could help answer.

  1. Point to practice group-specific materials
  2. Pull cases 
  3. Reset passwords 
  4. Identify book locations on a shelf in the office 
  5. [fill in the blank]
I would guess that pretty much anything you have on your library portal page as a frequently asked question could be leveraged into a ChatBot.
There has to be a good role that vendors could play in helping law librarians out with some of these features. I can see where a citation pull could go directly into a Westlaw, Lexis, Fastcase, etc. citation box and retrieve the material automatically. Passwords could be reset through an API connection either directly from the vendor, or if you are using a content management system (CMS) like Research Monitor or OneLog, you could verify the person’s identity and recover the password from your CMS. The ChatBot could interface with the library catalog and use the wealth of content knowledge contained there to quickly isolate materials the attorney needs. It could even point out that obscure print material that you’ve kept updated for just this specific occasion. 
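The basic plumbing of such a bot is simple. Here is a minimal, hypothetical sketch in Python using plain keyword matching; the handler functions and canned responses are stand-ins for the real vendor API, CMS, or catalog calls described above, not any actual product's interface:

```python
import re

def pull_case(message: str) -> str:
    # Grab everything after the word "pull" as the citation.
    # In practice this would be handed to Westlaw/Lexis/Fastcase.
    citation = re.split(r"pull\s+", message, flags=re.I)[1]
    return f"Retrieving {citation} from your research platform..."

# Keyword -> handler routing table. Real bots use NLP intent detection,
# but a FAQ bot can get surprisingly far on keywords alone.
ROUTES = [
    ("pull", pull_case),
    ("password", lambda m: "A reset link is on its way; check your firm email."),
    ("find", lambda m: "Checking the catalog for the shelf location..."),
]

def chatbot(message: str) -> str:
    lowered = message.lower()
    for keyword, handler in ROUTES:
        if keyword in lowered:
            return handler(message)
    # Fall back to a human when no FAQ matches.
    return "Sorry, I don't know that one yet -- routing you to a librarian."

print(chatbot("Please pull 550 U.S. 544"))
print(chatbot("I forgot my password"))
print(chatbot("What's for lunch?"))
```

The point of the sketch is the routing table: each FAQ on your portal page becomes one entry, and the handler is where a vendor integration would plug in.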
There is a great opportunity for a collaboration on the local level for the IT departments and the Law Library to create ChatBots which highlight the functionality of the organization’s web portal. There is also an opportunity for law librarian and legal information vendor collaboration for quickly pointing people to the right resources at the right moment. I understand that there are a number of logistical issues regarding whether ChatBots are locally hosted or cloud-based; is it only available on the local network, or can it be an app on the lawyers’ phones; and how do we work with the different variations of ChatBot platforms that are out there? Even with all these issues, I still think there is a great opportunity here for those willing to blaze a trail with using ChatBots in their law libraries. 

What I am about to write is completely anecdotal, but I think is relevant to the current disruption that we are seeing in the legal industry when it comes to automation of legal tasks. I know, most of you are asking, “how does that vary from all your other posts, Greg?” Quite frankly, it doesn’t, but I wanted to warn the readers that this one is my experience, and your mileage may vary.

I want to paraphrase something that I heard last week from a guest speaker at the AALL conference in Austin.

Lawyers don’t like automation of tasks because it cuts into their billable hours, and thus it costs them money.

This is a good line to say to a bunch of law librarians and legal tech professionals, but it’s kind of a cheap line, and in my experience, not all that accurate. It’s a line that has been said in different forms for the twenty years or more that Knowledge Management processes have been hailed as best practices for law firms. Add to that the history of business process improvements, Six-Sigma implementation, and now Artificial Intelligence and Machine Learning, and you’ve got a new platform to tell the story of “the attorneys won’t adopt these ideas because it will cut the time they charge their clients.”

I have to say that I have yet to talk with an attorney who hinted that this was a serious barrier to efficiency. Oh, I am positive that some of you have run into these attorneys; I’m just saying that it has not been my personal experience.

What I have seen, however, is the challenge of implementing these processes and tools into the workflow of the attorney without causing a major disruption, at least initially, in their ability to do the work. Sometimes this disruption lasts for months. Again, I’ll give you another anecdote.

When I was at ILTA last year, I watched an amazing presentation from some very forward-thinking lawyers who created automation tools and machine learning techniques to process a type of transactional document. The outcome was pretty amazing, reducing the time to process documents from dozens of hours to a few minutes or hours. Plunk in the data, press the “go” button, and watch the machines do the work. The idea was to make the lawyers focus on what they are really good at, which is dealing with the highest risks the clients face, and not waste time on no-risk or low-risk items in the portfolio. Lawyers could then charge an alternative fee that still made them a nice profit while reducing the client’s overall spend. On top of all of that, it also sped up the time spent on the matter.

Now, you might read that last part and say, “I can see why lawyers would refuse to do that. It cuts their own throats by making less revenue.” That sounds like a solid interpretation. However, let me add in one more detail to the story which I got after the presentation when I asked the presenters this question. “How long did it take you to automate this one type of deal, and how many people did it take to get it operational?”

The answer was that it took six to eight months, four or five consultants and programmers, and two or three attorneys who could test the system as it was being created, and give feedback. That was for one type of deal. I don’t think I’m going too far out on a limb here to say that the cost of this was probably in the mid six-figure range or higher.

Granted, the first item brought to market is the most expensive, and it is very possible that the next type of deal would only take a few weeks to bring online, and a diminishing amount of time for the next deal type, and the next deal type. How many law firms are going to take this risk with the upfront costs in the hopes that eventually they will get a return on their investment?

So let’s get back to the idea that lawyers don’t like automation because it costs them billable hours. I think that the real answer is that most lawyers don’t like automation because the change is too costly, both in time and money. High risk can mean high reward, but it is still a high risk.

Perhaps the story I’m using here is a situation where we attempt to do too much all at once. I’m a big believer that law firms don’t lack for resources which improve overall efficiency. What they lack is the discipline to apply the resources they already have. Instead of looking at the latest bleeding-edge technology that promises to reduce months of work to seconds, look to the tools you’ve already bought that will shave ten minutes off an hour of work. It’s not as cool, but it is more likely to work.

One of the best things that sometimes happens at professional conferences is when a speaker says something that makes you sit up and pay attention. That happened to me at the British Irish Association of Law Libraries (BIALL) meeting earlier this month when Nick West, Chief Strategy Officer for Mishcon de Reya LLP, paraphrased chess grand master Garry Kasparov and stated:

An intermediate chess player and a computer beats a Grand Master chess player with a computer.

Now for a little background on why that made me sit up and pay attention.

There’s been so much buzz about how Artificial Intelligence (AI) is going to replace so much of the legal industry that it is overshadowing something we’ve known in practice for over twenty-five years. Lawyers and technology have a symbiotic relationship that will continue to deepen as technology advances. Yes, some tasks now conducted by lawyers will eventually be completed by technology, but tasks that today can only be handled by highly trained and experienced lawyers will someday soon be accomplished by intermediately experienced lawyers with the assistance of advanced technology. This isn’t some amazing claim; it has happened already, and will continue to happen into the future.

I talked with Nick West after his presentation, and he pointed me to an article from Deloitte authored by Jim Guszcza, Harvey Lewis, and Peter Evans-Greenwood entitled, Cognitive Collaboration: Why Humans and Computers Think Better Together. In the article, the authors argue that the concept of AI mimicking human thinking is a distraction from where the real value lies: in “the ways in which machine intelligence and human intelligence complement one another.” The concept they believe needs more focus than AI is Intelligence Augmentation, where “the ultimate goal is not building machines that think like humans, but designing machines that help humans think better.”

I won’t rehash what Guszcza et al. write in their article, but I do want to go back to the Kasparov statement mentioned earlier and how I see it applying to the legal field: an average or intermediately skilled lawyer with technology could be better than an expert lawyer, even if that expert had the same technology. First, let me state the actual quote from Garry Kasparov, from his New York Review of Books article, “The Chess Master and the Computer.”

Weak human + machine + better process was superior to a strong computer alone and, more remarkably, superior to a strong human + machine + inferior process.

Kasparov came to this conclusion after he held a “freestyle chess” match where any combination of computer and human chess players were allowed to compete. The result was that a couple of American amateur players using three computers simultaneously won over grand masters, even when the grand masters used state-of-the-art computers to aid them. Kasparov concluded that the amateurs won because their “skill at manipulating and ‘coaching’ their computers to look very deeply into positions effectively counteracted the superior chess understanding of their grandmaster opponents and the greater computational power of other participants.” In other words, they understood how to leverage the computers to play freestyle chess, and that skill proved greater than the skills needed to be a chess master playing freestyle chess.

Will the same concept apply to the legal field? I would say that the probability is very strong. The skills needed for the future lawyer aren’t necessarily the skills needed for an expert big law partner today. Those lawyers* who understand the legal concepts and applications, and who apply the proper technology using a superior process, can be better, faster, and cheaper than the highly skilled lawyers who cannot or will not leverage that technology, or who do not understand how to leverage it effectively.

I would say that this is not a future concept, but rather one that arrived in the legal profession years ago. As William Gibson famously said, “The future is already here – it’s just not evenly distributed.” There are already a number of “average” lawyers out there using technology and effective processes. As the technology improves and helps additional average lawyers think better, it is going to be harder for expert lawyers to prove their value. The skills needed to win in Freestyle Legal Practice will be held by those who understand the roles played by the lawyer, the technology, and the process.

*The fact that I got this idea from attending a law library conference may mean that average lawyers aren’t the only ones able to leverage these skills.

Roommate needed: Must like to watch anime, wake me up with a good morning, let me know if I need an umbrella for work today, text me during the day, turn on the lights as I’m coming home, and tell me good night as I drift off to sleep. I’m looking for someone with blue hair and a cute dress, who comes in her own 12-pound sealed container. I’ll pay up to $3,000 to have you shipped from Japan, and will wait up to a year for you to arrive. Because, apparently, I’m extremely lonely.

Enter: Gatebox’s Virtual Home Robot, Azuma Hikari.

Okay… I haven’t quite decided yet whether this is cool, or creepy. It’s probably both. Actually, it’s a lot of both. But, it is probably the next phase in integrated home technology and personal interactive technology. The video below shows the lonely Japanese professional and how his day is brightened by the sweet robot’s comments, suggestions, tweets, and courteous actions throughout the day. Once you get past the weirdness of it all, I think you can see the attraction (both in a good way, and in a bad way) of how this type of technology will advance the likes of what we have with Apple’s Siri and Amazon’s Alexa voice response systems.

The system is being presold now (around $3,000 with shipping), and as of last week over 200 units had been sold. A more extensive, 27-minute explanation and demonstration is available from Tokyo Otaku Mode’s Facebook page.

Creepy or not, I think this is the first of many virtual holographic personal assistants to come.

Ron Friedmann recently posted the following video to Twitter.

Ron and I have talked about this a lot, going back to my AI posts last December when I suggested that we stop using the term Artificial Intelligence in legal because it causes more confusion, consternation, and general trouble than it’s worth.

First, to answer Ron’s question: why all the AI hype in the legal market? The AI hype isn’t happening in the legal market. It’s happening throughout the world. It’s now in our homes with Nest Thermostats and Hue light bulbs. It’s in our pockets with Siri, and in our offices with Alexa. It’s the basis of one of the most engrossing shows on HBO right now, Westworld. And we still have brilliant people like Elon Musk and Stephen Hawking warning that AI will likely kill us if we don’t take precautions. What we’re seeing in the legal market is just bleed-through from the massive hype happening in the rest of the world. And I think it’s all about to come crashing down. We will shortly enter the great Trough of Disillusionment for AI.

I don’t say that because I think AI will fail to live up to its promise. On the contrary, I think AI will way outstrip our current expectations. However, we humans are fickle. Our expectations shift quickly. Louis C.K. explains it best in his routine about airplane WiFi. In the AI space, this same fickle attitude leads to an interesting phenomenon: over time we adjust what we believe qualifies as AI. The more common a technology becomes, the less we believe it to be Artificial Intelligence.

Google isn’t considered AI, but it ‘knows’ what you’re typing as you type, and then it filters a large portion of the web to give you the most relevant pages. It would have easily been seen as AI twenty years ago. Siri and Alexa personal assistants respond to voice commands and can return information instantly or actually perform tasks online, but they are considered borderline AI at best these days. Completely self-driving automobiles are still seen as Science Fiction and therefore are solidly in the AI column, but I predict they will NOT widely be considered AI by the time they are commercially available. AI is a moving target. By the time a technology is commercialized, it’s no longer considered Artificial Intelligence. Consequently, we fickle humans are consistently underwhelmed by the promise of AI even as AI fundamentally changes the world around us.

The same is happening in legal right now. AI is all over the place, from e-discovery to contract review, due diligence, and data extraction, to my own company’s expert system platform. (Oh, BTW, I’ve got another new job since last I wrote.) But the more we see of it, the less we believe it to truly represent Artificial Intelligence. AI is always just beyond the horizon, just on the other side of the next technological breakthrough. It’s always something just slightly better than what we can do right now.

So I say, “Don’t buy into the AI hype!” Not because AI is not real, but because hyperbolic expectations for AI lead to a belief in ‘magical technology’. And expectations of ‘magic yet to come’ will prevent you from taking advantage of the remarkable and capable technology that is absolutely available.

It’s not ‘Artificial’ Intelligence, it’s Your Intelligence: Augmented, Enhanced, and Multiplied.

This post originally appeared on the HighQ Blog.  

Last week, at the HighQ Forum in London, our new robot overlords displayed their mighty powers and declared that all human lawyers should line up and await their turn at the guillotine. 
Oh wait… I’m wrong, that didn’t happen.
However, we did get a brief glimpse into the future of legal service delivery, with what could arguably be called the first true robot lawyer. 
Yes, it’s a title that has been thrown around quite a bit recently. Both ROSS and KIM have been labeled robot lawyers, but ROSS is a very powerful research tool and KIM is a ‘virtual assistant’, akin to Siri for law. 
This is not to in any way diminish either of these technologies; if moderately pressed, I will admit to being a huge fan-boy of both. But I think the term robot lawyer, when applied to these technologies, has invited skepticism and derision from people who claim that computers simply cannot do what humans can do.
We set out to do some actual lawyering with computers.
HighQ Collaborate is a platform that allows for easy sharing and communication within firms, or between clients and firms. We may not be the obvious choice for setting out to create a robot lawyer.
But therein lies the strength of our approach, because our robot lawyer is not a product. It’s not a creation of a single company. It’s simply a proof of concept to show what is possible when you combine the resources and tools you have at your disposal to create something that is greater than the sum of its parts.
This is a technique I talk about a lot, that I call bricolage.
Bricolage gives you the best of both the Buy and Build options. You are still building a custom solution to solve your particular problem. That could potentially give your firm a competitive advantage.
However, you are also using purpose built tools that are fully supported by other companies to ensure that you have the most robust solution possible. To me, bricolage is the answer to the Buy vs. Build question for law firms.
In February, HighQ announced its integration with RAVN, an AI data extraction tool that allows you to pull specific data out of unstructured documents and to move it into a structured format. 
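To make the extraction idea concrete, here is a toy sketch of pulling structured fields out of unstructured lease text. To be clear, this is not how RAVN actually works (commercial tools use trained models, not hand-written patterns); the regexes, field names, and sample lease below are all invented for illustration:

```python
import re

# A made-up lease excerpt standing in for an uploaded, unstructured document.
LEASE_TEXT = """
This Lease is made between Acme Properties Ltd (the "Landlord") and
Widget Retail plc (the "Tenant"). The annual rent shall be £120,000.
The term of the lease is 10 years commencing on 1 January 2018.
"""

# One pattern per field we want to land in a structured row.
PATTERNS = {
    "tenant": re.compile(r'between .+? and\s+(.+?)\s+\(the "Tenant"\)'),
    "annual_rent": re.compile(r"annual rent shall be £([\d,]+)"),
    "term_years": re.compile(r"term of the lease is (\d+) years"),
}

def extract(text: str) -> dict:
    """Return a structured row (think: one iSheet row) from raw lease text."""
    row = {}
    for field, pattern in PATTERNS.items():
        match = pattern.search(text)
        row[field] = match.group(1) if match else None
    return row

row = extract(LEASE_TEXT)
print(row)
```

The output is exactly the kind of structured record that can then be filed, filtered, and fed into downstream assessment logic.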
On June 9th, at our Client Forum, we also announced integration with Neota Logic, a different kind of AI that allows you to build powerful expert systems to replicate virtually any logical process that can be codified.
For the forum, I was joined on stage at the British Film Institute, on the south bank of the Thames, by Sjoerd Smeets from RAVN and Greg Wildisen from Neota Logic. As a demonstration of the combined power of our three platforms, we presented a scenario:
Imagine you’re a law firm, and you are approached by a client that is considering acquiring a large number of commercial leases. They want you to help determine the value of these leases over their entire term, as well as identify any risks associated with each lease.
Now, most firms would have two options:
  1. Get a bunch of young lawyers, or contract lawyers, in a room and have them manually plow through the many thousands of leases, calculating the value and highlighting any risky clauses or potential concerns.
  2. Work with the client to identify a subset of leases to review manually, and make a number of assumptions about the rest of the leases in order to provide some likely risks they may face.
But with HighQ, RAVN, and Neota, there is a third option.
Clients will commonly upload a large set of documents into our HighQ Collaborate site. An administrator will then go through the documents, ensuring that they are appropriately filed and then notify (or set auto-notifications to notify) the appropriate lawyers that the documents are out there waiting for some attention. 
In our demo last Thursday, the files were bulk uploaded and then RAVN went to work reviewing the documents.
First it identified the types of documents in the zip file: there were 10 commercial shopping mall leases and 5 ISDAs. As the audience watched, Sjoerd from RAVN hit refresh and nothing happened.
He waited a second, hit refresh again, and nothing happened. He looked back at his laptop that was running the software, which I could see running, and I thought, “NOOOOO!  The curse of the live demo!” I was silently screaming what an idiot I must be for trying to do this live. 
But then Sjoerd hit refresh one more time, and you could see that the numbers were changing. RAVN was moving the files to the Shopping Mall Leases, and ISDA folders that we created. 
Then he clicked over to iSheets, our online spreadsheet/database module, and showed how RAVN was populating the sheet with information from the uploaded documents. First one row of data showed up, refresh, four more rows, refresh, all ten. And with that Sjoerd handed the computer over to Greg from Neota. 
Greg took the stage and showed the app that Neota had embedded into Collaborate. With the touch of one button marked “Run Lease Assessment,” the app performed four tasks for each lease.
It calculated the portfolio rental value from any given start date and assessed the risks associated with that rental value (such as a tenant’s right to early termination and/or assignment, special obligations on the landlord, conditions around the security deposit, etc.).
Clicking through the app brings you to a valuation summary that shows the total value of the aggregated leases, as well as an aggregate Red Amber Green risk assessment of all leases. In addition, each lease is given its own valuation and risk report and the iSheet is updated with the valuation and risk report. It does all of this in seconds. 
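The assessment described above can be imagined as a simple set of codified rules. The sketch below is purely hypothetical; the lease fields, the flat rent-times-term valuation, and the Red/Amber/Green cut-offs are invented for illustration and are not Neota Logic's actual logic:

```python
# Toy portfolio: structured rows like those an extraction tool might produce.
LEASES = [
    {"id": "Unit 1", "annual_rent": 120_000, "years_left": 10,
     "early_termination": False, "deposit_held": True},
    {"id": "Unit 2", "annual_rent": 80_000, "years_left": 3,
     "early_termination": True, "deposit_held": False},
    {"id": "Unit 3", "annual_rent": 95_000, "years_left": 6,
     "early_termination": True, "deposit_held": True},
]

def lease_value(lease: dict) -> int:
    # Deliberately simplistic: rent times remaining term,
    # with no discounting, indexation, or break-date adjustment.
    return lease["annual_rent"] * lease["years_left"]

def rag_rating(lease: dict) -> str:
    # Count risk flags: two or more -> Red, one -> Amber, none -> Green.
    flags = sum([lease["early_termination"], not lease["deposit_held"]])
    return {0: "Green", 1: "Amber"}.get(flags, "Red")

portfolio_value = sum(lease_value(lease) for lease in LEASES)
print(f"Portfolio value: £{portfolio_value:,}")
for lease in LEASES:
    print(lease["id"], f"£{lease_value(lease):,}", rag_rating(lease))
```

The real value of an expert system is that rules like these run identically over ten leases or ten thousand, which is what makes the "seconds, not weeks" result possible.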
I took the stage again and did my best Steve Jobs impersonation. “That is amazing!” Except, it wasn’t hyperbole, that is actually really amazing. Several people came up to me after and said, “I’m afraid your presentation was too slick, I don’t think that everyone in the audience understood what you three just did there.” 
But enough understood it. And enough can extrapolate to their own use cases and opportunities.  Enough can imagine how they could then use Collaborate to share the results of the AI engines, filtering views of the iSheets and permissioning them for different audiences, the client, the practice group, the contract lawyers, and any others you could think of. 
Each group seeing only the information that is relevant and important to their portion of the work at hand. Enough understood what we did on Thursday that they are beginning to talk, and they are beginning to ask whether we could make this work for their particular use case.   
This robot lawyer does not replace human lawyers. It makes them faster, more efficient, more consistent, and happier. 
Because this robot lawyer tells them where to focus their energies: on the high-risk leases or contracts, the kinds of things that lawyers really want to do, instead of mindlessly slogging through 50 mind-numbing, perfectly normal contracts a day, hoping to find the one anomaly in a hundred.
This robot lawyer doesn’t replace human lawyers. It makes them better lawyers.

I wrote a post last week in which I called for a moratorium on the term Artificial Intelligence in relation to the law.  Instead I suggested that you should just replace AI with the term Automation because “they’re exactly the same thing, at least as far as the current legal market is concerned.”

Some people took me to task for over-simplifying the issue. Fair. Some seemed to think that I didn’t understand that AI and Automation were separate things, and they helpfully sent me links to Wikipedia pages and dictionary definitions of AI.  Very kind, but unnecessary.  I assure you, I understand the differences.

My underlying point – and admittedly I sometimes meander on my way to getting there, which can cause confusion, consternation, and even anger among my less patient readers – was that, much of what we call artificial intelligence in the legal industry is simply the automation of historically manual processes.  And if we refer to these things as “Automation” instead of “Artificial Intelligence” we are more likely to have intelligent, thoughtful, and meaningful conversations about the future of legal practice, than we are to run screaming through the halls, crying uncontrollably, and tearing out our hair for fear of the robots coming for our jobs.

I know many of you will find this hard to believe, as I usually revel in setting verbal fires, but my goal in abandoning “Artificial Intelligence” as a term was to improve the overall quality of discourse surrounding the use of Artificial Intelligence in the practice of law.

John Alber, former Strategic Innovation Partner at Bryan Cave and current Futurist-in-Residence for ILTA, called me out for over-simplifying.

Arrgh! Fine. I’m not going to fight with John; he was the first person I ever met at an ILTA conference and, more relevantly, he’s right. So I suggested an alternative.

But Kenneth Grady, former CEO of SeyfarthLean and current Legal Future Evangelist rightfully suggested:

So I stewed.  Pushed it to the back of my mind and focused on other far less interesting, but more profitable things like work. Until today during lunch, when I was reading an article on Augmented Reality and it suddenly hit me.  The middle ground between Artificial Intelligence and Automation is Augmented Intelligence.

I suggested this alternative to my twitter colleagues and Kenneth Grady helpfully suggested the addition of Human.

And there it was, the term I was looking for.

Augmented. Human. Intelligence.  #AHI

It’s much less hysteria inducing than the term Artificial Intelligence.  And arguably, it’s a more accurate representation of what is happening now in the computer assisted practice of law.

So, today I am calling for a moratorium on the term Artificial Intelligence in the practice of law (unless you are actually talking about a Turing machine that bills by the hour), because whether we’re discussing Watson, Ross, Kira, or Kim, we’re talking about Augmented Human Intelligence.

I am calling for an official moratorium on the term Artificial Intelligence in relation to the law!  Everyone please just stop using it. It’s a needlessly charged word that only confuses and clouds the underlying issues whenever it comes up.

From now on, any time you feel the need to use the term Artificial Intelligence, replace it with Automation. No, seriously, they’re exactly the same thing, at least as far as the current legal market is concerned. Whereas AI carries connotations of ‘robot lawyers’ replacing people, Automation seems friendly, simple, even mundane. That’s good. Automation is the future of legal practice.

My friend Ron Friedmann posted a Twitter poll last week that got my hackles up.

Come on people!  Really?  Collaboration software!?  Biggest impact on legal market in next 3 years? Do people even read the question before they start ticking boxes?

Don’t get me wrong, I am a huge fan of collaboration software.  I firmly believe that modern collaboration tools are a fundamental requirement for any modern law firm, akin to a document management system, a productivity suite, and maybe a handful of lawyers.  But the ‘most impact on legal market’?  Tech that has been widely available for 10 years, that everyone is already using, even if IT or firm management frowns on it.  I don’t think so.

The correct answer, and the one that was chosen by a majority of respondents, is Automation.  I know, Automation wasn’t officially a choice, but look at the options again.  AI/Machine Learning and Contract Analytics collectively received 58% of the votes. Contract Analytics is a form of AI/Machine Learning and they should have both been listed as Automation tools.

Woo hoo! Ron’s readers aren’t dumb, they just got a little confused by the options. Easy to do when the confounding term AI rears its ugly head.

This was all bouncing around in my head yesterday when I saw the following article on Bloomberg BNA.

Another Law Firm Adopts Automation Technology

In the latest sign that more and more legal services are being automated, Akerman has announced it will operate a data center that allows corporate clients to quickly look up data privacy and security regulations without having to consult a human lawyer.

Look at that. The beauty of it. The simplicity. The near total lack of hysteria about robots stealing jobs. And guess what words don’t even appear in the article:  Artificial and Intelligence.

But you know what that article is about?  The biggest impact on the legal market in the next 3 years.

Automation.  Or as I like to call it, the creation of Legal Engines, by Legal Engineers, to automate the practice of law one task at a time.

If only someone had foreseen that such a thing might happen.