One of the highlights of the American Association of Law Libraries (AALL) conference in Austin this year was the Innovation Tournament, which pitted three librarians’ tech innovations against one another. With two prizes, each worth $2,500, up for grabs, the competition was pretty tough. There was a scanning project management innovation, a Virtual Reality presentation preparedness tool, and an innovative ChatBot for legal information assistance. The ChatBot really caught my attention as something that I would love to test out on a local level.
We’ve probably all seen ChatBots, either on Facebook or when we contact customer service at our favorite online retailers. The idea is pretty simple: a ChatBot automates many of your frequently asked questions (FAQs) into an interactive chat session. From requesting a technician to fix your dishwasher to ordering a pizza, ChatBots are out there handling repetitive tasks.
Imagine the FAQs for law libraries that a ChatBot could help answer.
- Point to practice-group-specific materials
- Pull cases
- Reset passwords
- Identify book locations on a shelf in the office
- [fill in the blank]
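To make the idea concrete, here is a minimal sketch of how a FAQ-style library ChatBot might match a patron’s question to a canned answer. The questions, answers, and keywords are hypothetical examples I made up for illustration, not any library’s actual data, and a production ChatBot would use a proper NLP platform rather than simple keyword scoring.

```python
# Hypothetical FAQ entries: topic keywords mapped to canned answers.
FAQ = {
    "reset password": "To reset your research database password, "
                      "email the reference desk or use the intranet page.",
    "book location": "Search the catalog for the call number; "
                     "the shelf map shows which office shelf holds it.",
    "pull case": "Send the citation to the reference desk and "
                 "we will pull the case for you.",
}

def answer(question: str) -> str:
    """Return the FAQ answer whose topic keywords best match the question."""
    q = question.lower()
    best_topic, best_score = None, 0
    for topic in FAQ:
        # Score each topic by how many of its keywords appear in the question.
        score = sum(1 for word in topic.split() if word in q)
        if score > best_score:
            best_topic, best_score = topic, score
    if best_topic is None:
        return "I don't know that one yet -- let me connect you with a librarian."
    return FAQ[best_topic]
```

Even a toy like this shows the appeal: the repetitive questions get instant answers around the clock, and anything the bot can’t match falls through to a human librarian.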
Photo by Charlz Gutiérrez De Piñeres on Unsplash
What I am about to write is completely anecdotal, but I think it is relevant to the current disruption that we are seeing in the legal industry when it comes to automation of legal tasks. I know, most of you are asking, “how does that vary from all your other posts, Greg?” Quite frankly, it doesn’t, but I wanted to warn the readers that this one is my experience, and your mileage may vary.
I want to paraphrase something that I heard last week from a guest speaker at the AALL conference in Austin.
Lawyers don’t like automation of tasks because it cuts into their billable hours, and thus it costs them money.
This is a good line to say to a bunch of law librarians and legal tech professionals, but it’s kind of a cheap line, and in my experience, not all that accurate. It’s a line that has been said in different forms for the twenty years or more that Knowledge Management processes have been hailed as best practices for law firms. Add to that the history of business process improvements, Six-Sigma implementation, and now Artificial Intelligence and Machine Learning, and you’ve got a new platform to tell the story of “the attorneys won’t adopt these ideas because it will cut the time they charge their clients.”
I have to say that I have yet to talk with an attorney who hinted that this was a serious barrier to efficiency. Oh, I am positive that some of you have run into these attorneys; I’m just saying that it has not been my personal experience.
What I have seen, however, is the challenge of implementing these processes and tools into the workflow of the attorney without causing a major disruption, at least initially, in their ability to do the work. Sometimes this disruption lasts for months. Again, I’ll give you another anecdote.
When I was at ILTA last year, I watched an amazing presentation from some very forward-thinking lawyers who created automation tools and machine learning techniques to process a type of transactional document. The outcome was pretty amazing: it reduced the time to process documents from dozens of hours down to a few minutes or hours. Plunk in the data, press the “go” button, and watch the machines do the work. The idea was to let the lawyers focus on what they are really good at, dealing with the highest risks the clients face, rather than wasting time on no-risk or low-risk items in the portfolio. Lawyers could then charge an alternative fee that still made them a nice profit but, at the same time, reduced the clients’ overall spend. On top of all of that, it also sped up the time spent on the matter.
Now, you might read that last part and say, “I can see why lawyers would refuse to do that. It cuts their own throats by making less revenue.” That sounds like a solid interpretation. However, let me add in one more detail to the story which I got after the presentation when I asked the presenters this question. “How long did it take you to automate this one type of deal, and how many people did it take to get it operational?”
The answer was that it took six to eight months, four or five consultants and programmers, and two or three attorneys who could test the system as it was being created, and give feedback. That was for one type of deal. I don’t think I’m going too far out on a limb here to say that the cost of this was probably in the mid six-figure range or higher.
Granted, the first item brought to market is the most expensive, and it is very possible that the next type of deal would only take a few weeks to bring online, and a diminishing amount of time for the next deal type, and the next deal type. How many law firms are going to take this risk with the upfront costs in the hopes that eventually they will get a return on their investment?
So let’s get back to the idea that lawyers don’t like automation because it costs them billable hours. I think that the real answer is that most lawyers don’t like automation because the change is too costly, both in time and money. High risk can mean high reward, but it is still a high risk.
Perhaps the story I’m using here is a situation where we attempt to do too much all at once. I’m a big believer that law firms don’t lack for resources that improve overall efficiency; what they actually lack is the application of the resources they already have. Instead of looking at the latest bleeding-edge technology that promises to reduce months of work to seconds, look to the tools you’ve already bought that will shave ten minutes off an hour of work. It’s not as cool, but it is more likely to work.
One of the best things that sometimes happens at professional conferences is when a speaker says something that makes you sit up and pay attention. That happened to me at the British Irish Association of Law Libraries (BIALL) meeting earlier this month when Nick West, Chief Strategy Officer for Mishcon de Reya LLP, paraphrased chess grand master Garry Kasparov and stated:
An intermediate chess player and a computer beats a Grand Master chess player with a computer.
Now for a little background on why that made me sit up and pay attention.
There’s been so much buzz about how Artificial Intelligence (AI) is going to replace so much of the legal industry that it is overshadowing something we’ve known in practice for over twenty-five years: lawyers and technology have a symbiotic relationship that will continue to deepen as technology advances. Yes, some tasks now conducted by lawyers will eventually be completed by technology, but there are also tasks that today can only be conducted by highly trained and experienced lawyers and that will someday soon be accomplished by intermediately experienced lawyers with the assistance of advanced technology. This isn’t some amazing claim; it has happened already, and it will continue to happen into the future.
I talked with Nick West after his presentation, and he pointed me to an article from Deloitte authored by Jim Guszcza, Harvey Lewis, and Peter Evans-Greenwood entitled Cognitive Collaboration: Why Humans and Computers Think Better Together. In the article, the authors argue that the concept of AI mimicking human thinking is a distraction from where the real value lies: in “the ways in which machine intelligence and human intelligence complement one another.” The concept they believe needs more focus than AI is Intelligence Augmentation, where “the ultimate goal is not building machines that think like humans, but designing machines that help humans think better.”
I won’t rehash what Guszcza et al. write in their article, but I do want to go back to the Kasparov statement mentioned earlier and how I see it applying to the legal field: an average or intermediately skilled lawyer with technology could be better than an expert lawyer, even if that expert had the same technology. First of all, let me state the actual quote from Garry Kasparov’s New York Review of Books article, “The Chess Master and the Computer.”
Weak human + machine + better process was superior to a strong computer alone and, more remarkably, superior to a strong human + machine + inferior process.
Kasparov came to this conclusion after he held a “freestyle chess” match where any combination of computer and human chess players was allowed to compete. The result was that a couple of American amateur players using three computers simultaneously won over grand masters, even when the grand masters used state-of-the-art computers to aid them. Kasparov concluded that the amateurs won because their “skill at manipulating and ‘coaching’ their computers to look very deeply into positions effectively counteracted the superior chess understanding of their grandmaster opponents and the greater computational power of other participants.” In other words, they understood how to leverage the computers to play freestyle chess, and that skill proved greater than the skills needed to be a chess master playing freestyle chess.
Will the same concept apply to the legal field? I would say that the probability is very strong. The skills needed by the future lawyer aren’t necessarily the skills needed by an expert big law partner today. Those lawyers* who understand the legal concepts and applications, and who apply the proper technology using a superior process, can be better, faster, and cheaper than the highly skilled lawyers who cannot or will not leverage that technology, or who cannot understand how to effectively leverage its application.
I would say that this is not a future concept, but rather one that arrived in the legal profession years ago. As William Gibson famously said, “The future is already here – it’s just not evenly distributed.” There are already a number of “average” lawyers out there using technology and effective processes. As the technology improves and helps more average lawyers think better, it is going to be harder for expert lawyers to prove their value. The skills needed to win in Freestyle Legal Practice will be held by those who understand the roles played by the lawyer, the technology, and the process.
*The fact that I got this idea from attending a law library conference may mean that average lawyers aren’t the only ones who will be able to leverage these skills.
Roommate needed: Must like to watch anime, wake me up with a good morning, let me know if I need an umbrella for work today, text me during the day, turn on the lights as I’m coming home, and tell me good night as I drift off to sleep. I’m looking for someone with blue hair and a cute dress, who comes in her own 12-pound sealed container. I’ll pay up to $3,000 to have you shipped from Japan, and will wait up to a year for you to arrive. Because, apparently, I’m extremely lonely.
Okay… I haven’t quite decided yet whether this is cool or creepy. It’s probably both. Actually, it’s a lot of both. But it is probably the next phase in integrated home technology and personal interactive technology. The video below shows the lonely Japanese professional and how his day is brightened by the sweet robot’s comments, suggestions, tweets, and courteous actions throughout the day. Once you get past the weirdness of it all, I think you can see the attraction (both in a good way and in a bad way) of how this type of technology will advance the likes of what we have with Apple’s Siri and Amazon’s Alexa voice response systems.
The system is being presold now (around $3,000 with shipping), and as of last week over 200 units had been sold. A more extensive, 27-minute explanation and demonstration is available from Tokyo Otaku Mode’s Facebook page.
Creepy or not, I think this is the first of many virtual holographic personal assistants to come.
Ron Friedmann recently posted the following video to Twitter.
My musings on the legal market hype about artificial intelligence (AI). ISO help to understand that hype. pic.twitter.com/p1aymC71xF
— ronfriedmann (@ronfriedmann) November 20, 2016
Ron and I have talked about this a lot, going back to my AI posts last December when I suggested that we stop using the term Artificial Intelligence in legal because it causes more confusion, consternation, and general trouble than it’s worth.
First, to answer Ron’s question, why all the AI hype in the legal market? The AI hype isn’t happening in the legal market. It’s happening throughout the world. It’s now in our homes with Nest Thermostats and Hue light bulbs. It’s in our pockets with Siri, and in our offices with Alexa. It’s the basis of one of the most engrossing shows on HBO right now, Westworld. And we still have brilliant people like Elon Musk and Stephen Hawking warning that AI will likely kill us if we don’t take precautions. What we’re seeing in the legal market is just bleeding-through from the massive hype happening in the rest of the world. And I think it’s all about to come crashing down. We will shortly enter into the great Trough of Disillusionment for AI.
I don’t say that because I think AI will fail to live up to its promise. On the contrary, I think AI will way outstrip our current expectations. However, we humans are fickle. Our expectations shift quickly. Louis C.K. explains it best in his routine about Airplane WiFi.
In the AI space, this same fickle attitude leads to an interesting phenomenon: over time, we adjust what we believe qualifies as AI. The more common a technology becomes, the less we believe it to be Artificial Intelligence. Google isn’t considered AI, but it ‘knows’ what you’re typing as you type, and then it filters a large portion of the web to give you the most relevant pages. It would easily have been seen as AI twenty years ago. Siri and Alexa personal assistants respond to voice commands and can return information instantly or actually perform tasks online, but they are considered borderline AI at best these days. Completely self-driving automobiles are still seen as Science Fiction and therefore are solidly in the AI column, but I predict they will NOT widely be considered AI by the time they are commercially available. AI is a moving target. By the time a technology is commercialized, it’s no longer considered Artificial Intelligence. Consequently, we fickle humans are consistently underwhelmed by the promise of AI even as AI fundamentally changes the world around us.
The same is happening in legal right now. AI is all over the place, from e-discovery to contract review, due diligence, and data extraction, to my own company’s expert system platform. (Oh, BTW, I’ve got another new job since last I wrote.) But the more we see of it, the less we believe it to truly represent Artificial Intelligence. AI is always just beyond the horizon. Just on the other side of the next technological breakthrough. It’s always something just slightly better than what we can do right now.
So I say, “Don’t buy into the AI hype!” Not because AI is not real, but because hyperbolic expectations for AI lead to a belief in ‘magical technology’. And expectations of ‘magic yet to come’ will prevent you from taking advantage of the remarkable and capable technology that is absolutely available.
It’s not ‘Artificial’ Intelligence, it’s Your Intelligence: Augmented, Enhanced, and Multiplied.
This post originally appeared on the HighQ Blog.
- Get a bunch of young lawyers, or contract lawyers, in a room and have them manually plow through the many thousands of leases, calculating the value and highlighting any risky clauses or potential concerns.
- Work with the client to identify a subset of leases to review manually, and make a number of assumptions about the rest of the leases in order to provide some likely risks they may face.
I wrote a post last week in which I called for a moratorium on the term Artificial Intelligence in relation to the law. Instead I suggested that you should just replace AI with the term Automation because “they’re exactly the same thing, at least as far as the current legal market is concerned.”
Some people took me to task for over-simplifying the issue. Fair. Some seemed to think that I didn’t understand that AI and Automation were separate things, and they helpfully sent me links to Wikipedia pages and dictionary definitions of AI. Very kind, but unnecessary. I assure you, I understand the differences.
My underlying point – and admittedly I sometimes meander on my way to getting there, which can cause confusion, consternation, and even anger among my less patient readers – was that much of what we call artificial intelligence in the legal industry is simply the automation of historically manual processes. And if we refer to these things as “Automation” instead of “Artificial Intelligence” we are more likely to have intelligent, thoughtful, and meaningful conversations about the future of legal practice, than we are to run screaming through the halls, crying uncontrollably, and tearing out our hair for fear of the robots coming for our jobs.
I know many of you will find this hard to believe, as I usually revel in setting verbal fires, but my goal in abandoning “Artificial Intelligence” as a term was to improve the overall quality of discourse surrounding the use of Artificial Intelligence in the practice of law.
John Alber, former Strategic Innovation Partner at Bryan Cave and current Futurist-in-Residence for ILTA, called me out for over-simplifying.
@eDepoze @rmcclead @Vizlegal #Automation goes too far the other way. It’s a 40s term applicable to a wide range of noncomputing processes.
— John Alber (@johnalber) December 7, 2015
Arrgh! Fine. I’m not going to fight with John; he was the first person I ever met at an ILTA conference and, more relevantly, he’s right. So I suggested an alternative.
I.A.: Intelligent Automation https://t.co/qYTrAMZiBP
— Ryan McClead (@rmcclead) December 7, 2015
But Kenneth Grady, former CEO of SeyfarthLean and current Legal Future Evangelist rightfully suggested:
Nah, everyone would think it was a typo. https://t.co/BIO42KijtV
— Kenneth A. Grady (@LeanLawStrategy) December 7, 2015
So I stewed. Pushed it to the back of my mind and focused on other far less interesting, but more profitable things like work. Until today during lunch, when I was reading an article on Augmented Reality and it suddenly hit me. The middle ground between Artificial Intelligence and Automation is Augmented Intelligence.
I suggested this alternative to my twitter colleagues and Kenneth Grady helpfully suggested the addition of Human.
What about AHI (Augmented Human Intelligence)? Avoids AI confusion & emphasizes role of humans. https://t.co/E5cMhs0CNP
— Kenneth A. Grady (@LeanLawStrategy) December 8, 2015
And there it was, the term I was looking for.
It’s much less hysteria-inducing than the term Artificial Intelligence. And arguably, it’s a more accurate representation of what is happening now in the computer-assisted practice of law.
So, today I am calling for a moratorium on the term Artificial Intelligence in the practice of law (unless you are actually talking about a Turing machine that bills by the hour), because whether we’re discussing Watson, Ross, Kira, or Kim, we’re talking about Augmented Human Intelligence.
I am calling for an official moratorium on the term Artificial Intelligence in relation to the law! Everyone please just stop using it. It’s a needlessly charged word that only confuses and clouds the underlying issues whenever it comes up.
From now on, any time you feel the need to use the term Artificial Intelligence, replace it with Automation. No, seriously, they’re exactly the same thing, at least as far as the current legal market is concerned. Whereas AI carries connotations of ‘robot lawyers’ replacing people, Automation seems friendly, simple, even mundane. That’s good. Automation is the future of legal practice.
My friend Ron Friedmann posted a Twitter poll last week that got my hackles up.
Which tech will have most impact on legal market in next 3 years?
[Reply to suggest other choices for next poll]
— ronfriedmann (@ronfriedmann) November 26, 2015
Come on people! Really? Collaboration software!? Biggest impact on legal market in next 3 years? Do people even read the question before they start ticking boxes?
Don’t get me wrong, I am a huge fan of collaboration software. I firmly believe that modern collaboration tools are a fundamental requirement for any modern law firm, akin to a document management system, a productivity suite, and maybe a handful of lawyers. But the ‘most impact on legal market’? Tech that has been widely available for 10 years, that everyone is already using, even if IT or firm management frowns on it. I don’t think so.
The correct answer, and the one that was chosen by a majority of respondents, is Automation. I know, Automation wasn’t officially a choice, but look at the options again. AI/Machine Learning and Contract Analytics collectively received 58% of the votes. Contract Analytics is a form of AI/Machine Learning and they should have both been listed as Automation tools.
Woo hoo! Ron’s readers aren’t dumb, they just got a little confused by the options. Easy to do when the confounding term AI rears its ugly head.
This was all bouncing around in my head yesterday when I saw the following article on Bloomberg BNA.
In the latest sign that more and more legal services are being automated, Akerman has announced it will operate a data center that allows corporate clients to quickly look up data privacy and security regulations without having to consult a human lawyer.
Look at that. The beauty of it. The simplicity. The near total lack of hysteria about robots stealing jobs. And guess what words don’t even appear in the article: Artificial and Intelligence.
But you know what that article is about? The biggest impact on the legal market in the next 3 years.
Automation. Or as I like to call it, the creation of Legal Engines, by Legal Engineers, to automate the practice of law one task at a time.
With all this talk and blogging about AI, Big Data, metrics and analytics, pricing protocols, KM, Six Sigma, Lean, and Agile, I wonder if I am working in a manufacturing shop or a law firm. In the world of manufacturing, widget A can be compared to widget B; the two widgets can be taken apart, reverse engineered, put under stress tests, and compared one against the other down to their composite parts. But if you’ve ever done what I call the website practice description test, you will know that law firms all use eerily similar language, nuance, and style to describe what they do and for whom they do it. And yet each law firm is unique; there is something that makes one firm embrace AI or LPM while another will shy away from anything other than the billable hour. What, then, is the *real* differentiating factor for law firms? Culture.
Whether firms are commercially successful, embrace or reject technology, and encourage new management roles and processes is, in my mind, all tied to culture. One culture does not necessarily suggest success and another failure, but the ability of a firm to pursue its quality of excellence – however it defines and measures it – rests solely on its ability to maintain its cultural balance in every interaction. Little gestures, such as ending emails with “Smiles” or “no response required,” offering clients use of a firm’s meeting spaces, or larger firm discussions around collaboration, sharing of financial data within the firm, or making use of the Cloud in technology initiatives, each point to the culture of the firm and reinforce for partners, clients, staff, and business partners what a firm ultimately privileges. I have often wondered how it could be that laterals who were floundering at one firm move to another and are suddenly rainmakers or lauded as the best of the best in the business, or how one firm can implement a new software tool at a significant cost while others wouldn’t touch that same tool even if it were free. The answer is, of course, “fit” – or culture. Unlike in manufacturing, where the goods produced undergo strict quality assurance testing, it is the people who work in firms each and every day who offer up the de facto QA testing, turning ISO (certification) into IMHO.