[Ed. Note: Please welcome guest blogger, Colin Lachance, CEO of Compass/vLex Canada. – GL]
At no fewer than four conferences this lovely month of May, I will be speaking about artificial intelligence in law. Each event has a different focus (regulation, impact, libraries, family law), but as my comments in each will spring from my personal framework for considering issues, opportunities and implications, I thought it might help me to advance that framework for your feedback.
I take my title for this post from the 1962 French documentary “Le Joli Mai“, in which the thoughts and opinions of Parisians on various topics were gathered through a series of interviews and presented without filter. That seems to fit what happens at most conferences, as well as my desire to hear your thoughts and opinions on how we should be thinking about AI in law.
We are all learning the lingo of AI: Natural Language Processing, Deep Learning, Machine Learning, Expert Systems, Neural Networks, Cognitive Computing, Inference Engines, Analytics (descriptive, predictive, prescriptive) and so on. It’s easier than ever to sound conversant on the issues, but shared language doesn’t necessarily lead to a shared understanding, or to a shared means of evaluating claims of AI use or achievement.
In a recent Twitter chat, @placejudicata (Judicata CEO, Itai Gurari) directed me to an important observation by Michael Jordan (not that one, a different one) concerning the lack of shared understanding for what we mean when we say “AI”:
…the use of this single, ill-defined acronym prevents a clear understanding of the range of intellectual and commercial issues at play
My personal framework for compartmentalizing the intellectual and commercial ideas raised in discussions of artificial intelligence in law may have no utility to anyone but me. But when I read, hear or share a thought on the subject, I mentally categorize it into one of these buckets:
The point of the framework is to have a context in which to evaluate the matter before me. Are we talking about a feature inside a particular piece of software? Are we talking about research? Are we talking about a commercial market? Are we talking about social policy? Lines may blur and categories may overlap, but for purposes of putting all participants in a conversation on the same page, or as a bullshit detector, I’ve found it useful.
Starting with the bleeding edge, consider that for every technology researcher out there focused on law and working at the bleeding edge of AI, there are likely 1,000 people focused on domains other than law. Bleeding edge legal AI researchers exist, but these aren’t the people leading the public conversations about AI in law or selling the AI-driven products. In fact, in “legal AI”, the true bleeding edge work that should occupy us isn’t tech discovery or creation at all, and it’s certainly not marketing: it’s figuring out the legal and societal implications of a world where decision-making is increasingly outsourced to algorithms.
While a very small number of legal tech companies and institutions (new and established) are genuinely active at the leading edge / applied stage, a suspiciously large and rapidly growing number of legal tech organizations claim to be there. Those players are better understood as building ideas on the commercialized and open source wealth of knowledge and capabilities spun out by groups like Google, Amazon, IBM and Facebook and by research institutions around the world. And even at this stage, despite the presence and impact of many true innovators in the space, “legal AI” is still small compared to other sectors, and often does not warrant identification as an industry vertical by those who monitor the thousands of AI companies currently on the scene.
Steps 1, 2 and 3 notwithstanding, my framework is neither a continuum nor a process for any particular organization or technology. Think of it more as a point-in-time overlay. To me, at least, this becomes clear when you start to organize the ideas and entities we read about in the legal press, hear about at legal conferences and look to incorporate into legal practice. There is often a divergence between the marketing language of the bleeding/leading edge and the reality of what is, in effect, the application of “off the shelf” technology to a business challenge.
For example, a company that creates an “AI solution” to a legal practice challenge will likely have built that solution on the commercial and open source tools and ideas available to them. They will need to productize it to get their first client and build a market for their solution, and, ideally, evolve the experience so the user begins to consider it like any other frequently-used and must-have piece of software. Sure, they need to evolve the back end as well, since the scope of commercialized and open source tools and ideas available for other companies, and for clients themselves, to build on is constantly expanding as a result of the research taking place beyond legal. The point being that while their language and plans may sit in different buckets, these companies are still mostly focused on convincing potential customers to change the way they operate to incorporate a new product or service.
It’s already a little overwhelming, and we’ve barely cracked the scope of practice improvements that are possible with the tech that is readily available. As one AI-focused investor has suggested:
If we stopped all of the AI research right now, there’s at least 20 years of fruitful activity where we take the techniques we already know and put them in business applications.
This insight should make apparent to potential users of AI-driven products and software that there is a whole lot of innovation still ahead in the productized and software buckets. To that I would add: because the biggest barrier to “productized” solutions is behavioural change at the client level, in-house and highly customized business applications are already within reach and will take on a much bigger part of the legal AI conversation this year.
Looking at what a business or a client actually does, and the challenge they actually want to solve, demystifies a fair bit of the “AI” language in use and makes it easier to figure out where that business or client needs to focus to make their objective a reality.
* * *
How do you make sense of “legal AI” discussions?
How would you suggest I improve my approach?