Many of us have wondered when the big two legal information providers would jump into the generative AI game, and it looks like LexisNexis is going public first with the launch of Lexis+ AI. We sat down with Jeff Pfeifer, Chief Product Officer for the UK, Ireland, Canada, and US, to discuss the launch and what it means for LexisNexis going forward.
Pfeifer discusses how Lexis+ AI offers conversational search, summarization, and drafting tools to help lawyers work more efficiently. The tools provide contextual, iterative search so users can refine and improve results. The drafting tools can help with tasks like writing client emails or advisory statements. The summarization features can summarize datasets like regulations, opinions, and complaints.
LexisNexis is working with leading AmLaw 50 firms in a commercial preview program to get feedback and input on the AI tools. LexisNexis also launched an AI Insider Program for any interested firms to learn about AI and how it may impact their business. There is definitely demand for the AI Insider Program with over 3,000 law firms already signed up.
Pfeifer emphasizes LexisNexis’ focus on responsible AI. They developed and follow a set of AI principles to guide their product development, including understanding the real-world impact, preventing bias, explaining how solutions work, ensuring human oversight and accountability, and protecting privacy.
Pfeifer predicts AI tools like Lexis+ AI will increase access to legal services by allowing lawyers and firms to work more efficiently and take on more work. He also sees opportunities to use the tools to help pro se litigants and courts. However, he cautions that the responsible and ethical development and use of AI is crucial.
Overall, Pfeifer believes AI will greatly improve efficiency and capacity in the legal profession over the next 2-5 years but that responsible adoption of the technology is key.
Listen on mobile platforms: Apple Podcasts | Spotify
Marlene Gebauer 0:07
Welcome to The Geek in Review, the podcast focused on innovative and creative ideas in the legal industry. I'm Marlene Gebauer.
Greg Lambert 0:14
And I'm Greg Lambert. So Marlene, I think most of us in the legal tech world have wondered when the big two legal information providers are going to launch their own version of some generative AI tool or interface. And right now it looks like LexisNexis has jumped into the fray. They just announced last week the Lexis+ AI commercial preview and their AI Insider program.
Marlene Gebauer 0:40
Exactly. And we're happy that we got a very old friend from Lexis to come back on the show to tell us more about the preview and the Insider program. So we'd like to welcome Jeff Pfeifer, Chief Product Officer, Canada, UK, Ireland, and USA for LexisNexis. Jeff, welcome back to The Geek in Review.
Jeff Pfeifer 0:58
Thank you, Marlene, it’s great to be back with you today.
Greg Lambert 1:01
So I think when she said old friend, she meant longtime friend.
Marlene Gebauer 1:06
Should I revise? Let's revise that to longtime.
Greg Lambert 1:10
Longtime friend, longtime friend. So Jeff, I looked it up, and it's been a while since you have been on the podcast. Pre-pandemic, yeah, it was November of 2018, if you can believe it. And at that time, we talked about how you wanted to move at the speed of a startup there at Lexis. So I imagine, just with the speed of change lately, that type of attitude is more needed than ever. Am I right in that assumption?
Jeff Pfeifer 1:43
Well, it seems ironic that we were talking about that in 2018. The last year in particular has seen technology changes, more broadly, that are just incredible, and probably pointed at the legal market more so than technology changes typically are. So that definitely requires organizations like ours to be really nimble and make sure that we're both monitoring for those kinds of technology changes and, more importantly, studying the challenges that our customers are likely to face because of that new technology. So, in fact, yes, I would say we're moving faster than we've ever moved before. And that means we've had to make some changes to the way we build and develop product in order to move at that speed.
Greg Lambert 2:28
Yeah, I can imagine. And speaking of change, I've noticed a change in your title lately. You now have an expanded role covering the UK, Canada, and Ireland. So what prompted that?
Jeff Pfeifer 2:39
Well, we saw a big opportunity to connect the work that was happening in our own organization across multiple legal markets. And as I've been participating in work in each of those markets, I see more commonality than was previously thought to be the case. That has really influenced the way we build and develop product. We do see the markets approaching questions differently, and maybe the pace of change in each of those markets does vary. In the UK, for example, we see a different approach to the way information is managed, and commitment to knowledge management programs has a different profile than it does among some US law firms. For us, that has meant we get some insight into what is important to our clients in different phases. And that helps us address development opportunities in each of those markets, and in other legal markets around the world, by taking all of the information and feedback we have from those countries and considering it as we make product development choices.
Marlene Gebauer 3:51
LexisNexis recently announced the launch of Lexis+ AI, a generative AI platform for legal work. So what are you doing with generative AI? And how does it differ from the other types of AI that you've been using over the past few years?
Jeff Pfeifer 4:05
Well, it's really interesting. I think generative AI is probably the single biggest technology development that seems perfectly aligned with law firms and the way law firms work. And I say that because, at its most basic, it has two key benefits in my view. One is that it creates a conversational opportunity for a human to interact with a service. And number two, it creates efficiencies in drafting. Both of those are fundamentally really close to how a lawyer works outside of interacting with products. There is a lot of one-to-one human interaction with colleagues to ask how they would handle a situation, with the ability to ask contextual follow-ups. That's always been a problem in search: when you search, you lose your contextual history of that interaction. And then in drafting, obviously, there is an opportunity to speed the initial draft of various types of documents. So at its most basic level, we see the technology as being really tightly aligned with two key issues and opportunities in the legal market. I'm really struck by some broader studies done recently that suggest the application of generative AI could be more foundationally impactful than the productivity gains we saw during the Industrial Revolution. That's a huge statement, so I want to acknowledge that. But a study released last week by MIT suggested that workers using generative AI were likely to see productivity gains of 37% or more. I mean, imagine in a law firm if a lawyer were 37% more productive than they are now; that has huge implications for the business model of that firm. And a similar study published recently by Stanford found a similar gain, about a 14% productivity increase, for customer service workers interacting with clients.
So whether you believe the low end, 14%, or the high end, 37%, it strikes me that most law firms wouldn't know what to do today with a productivity gain between 14 and 37% across their professional workforce. So to me, that is really a huge opportunity. And most of the clients we talk to suggest it's either going to create increased capacity for the firm to take on more work, or it's going to allow them to move up the value chain in the type of legal service support they can offer to clients. And I think that's really exciting. That's one of the reasons we chose to plunge headfirst into this space: we think we can enable firms that are moving along that journey, and do so in a very secure, responsible way.
Greg Lambert 7:14
Yeah. I've mentioned this a couple of times; we had Darth Vaughn from Ford on a few weeks ago, and that was one of the things that he said: yes, it's going to increase productivity, and that may scare the attorneys with the current billable-hour business model that law firms have. But Darth's response was, look, I've got so much more legal work that I would love to have you work on and fix and address, but I don't have the time or the money currently to do that. And if you can figure out a way of creating more capacity, we will fill that capacity. We started seriously talking about killing the billable hour in 2008, maybe even slightly before that, but it's never really gone anywhere. I would be surprised if this isn't the big change factor that pushes law firms and the buyers of legal services to look at how they're paying for and selling legal services. So Jeff, let me ask you kind of a big question here about how Lexis is approaching this. How are you viewing both the benefits and the challenges of generative AI tools in the legal industry? I know we just talked about how our business model may be challenged. What other opportunities and challenges are you seeing out there?
Jeff Pfeifer 8:54
Let's start actually on the challenge side, because while there is a lot of hype and excitement around application of the models out of the box, in the legal world in particular, models face some really significant challenges. And those are challenges that we have to work to address. The most notable are obviously hallucinations, which are well documented issues in the large language model space. A lack of citation and legal authority reference is fundamentally a problem in large language models out of the box. The most important challenge that I see is the exposure to certain types of content. Generally speaking, most large language models refresh their collection of public source content about twice a year. So if your listeners are thinking about how to know what content is underneath a service like ChatGPT or some other large language model, again, those models weren't developed for specific use cases in the legal market, and the content that is infused into those models is generally updated twice a year. In the legal profession, of course, we can't rely on data that is six months out of date. You couldn't ask a question, for example, about the state of law in a specific jurisdiction if the information in the model is dated by six months. So those are three big areas where we have focused our energy and effort. We'll come back to security in just a minute. But in each of those three areas, we've taken a technology approach: to reduce hallucinations, to ensure that when citations are placed they're accurate and real, and thirdly, to expose our models to updated content at the pace customers would expect from LexisNexis. And the way we do that is by marrying technology from the large language model with our search capabilities.
By marrying those two technologies together, we can minimize hallucinations, we can ensure that the citations exposed are real, and current content is exposed to the model, so that when an answer is created, it's based on highly valuable, current legal content. There's a lot more to it than that in the actual technology setup. But generally speaking, there is value in connecting large language model technologies with search. That's what we see Microsoft doing by connecting large language models to Bing: by connecting search and large language models, you get better quality answers and better quality model performance.
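The "marrying search with a large language model" that Pfeifer describes is, in essence, retrieval-augmented generation: fetch current, authoritative documents first, then constrain the model to answer from them with citations. Below is a minimal, hypothetical sketch of that pattern; the `overlap` scorer, the document index, and the `call_llm` callable are illustrative stand-ins, not LexisNexis APIs.

```python
# Hypothetical sketch of grounding an LLM answer in retrieved documents
# (retrieval-augmented generation). Not a real LexisNexis interface.

def overlap(query, text):
    """Crude relevance score: count of shared lowercase terms."""
    return len(set(query.lower().split()) & set(text.lower().split()))

def retrieve(query, index, k=3):
    """Return the top-k documents most relevant to the query."""
    ranked = sorted(index, key=lambda doc: overlap(query, doc["text"]), reverse=True)
    return ranked[:k]

def grounded_answer(query, index, call_llm):
    """Build a prompt that forces the model to answer only from retrieved,
    current documents and to cite them, reducing hallucinated authority."""
    docs = retrieve(query, index)
    context = "\n".join(f"[{d['cite']}] {d['text']}" for d in docs)
    prompt = (
        "Answer using ONLY the sources below. Cite each claim by its "
        "bracketed identifier. If the sources do not answer the question, "
        "say so.\n\nSources:\n" + context + "\n\nQuestion: " + query
    )
    return call_llm(prompt)
```

Because the index is refreshed on the provider's own schedule rather than the model's, this design is also what addresses the six-months-stale-content problem Pfeifer raises.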
Greg Lambert 11:45
I'm just curious, on the hardware side of things, are you struggling to keep up with the hardware requirements that are going to be needed to stay on top of this?
Jeff Pfeifer 11:58
In short, I think the biggest benefit in the entire large language model space in the last six months was the ability to deploy models through public commercial cloud, and the partnerships that have developed between large language model suppliers like OpenAI and cloud providers like Microsoft and AWS. By doing that, we see tremendous scaling benefits that come from those cloud providers. And in fact, cloud providers like AWS have worked to improve the technical configuration of the deployment, so that we can effectively manage the cost and the performance of the model in that cloud infrastructure. That is a benefit that didn't exist six months ago. Again, the timelines here are really crazy; they're moving fast. But by seeing these partnerships emerge between language model providers and the commercial cloud providers, we're also seeing a substantial focus on improving the cost to deploy these models, and the actual speed and performance of the models. And I think both areas are only going to continue to improve over time.
Marlene Gebauer 13:10
I want to explore a little bit more the benefits of what AWS can provide in terms of large language models and law firms, with Lexis+ AI's ability to leverage legal data in a way that creates trust on the user side of things, as well as the comprehensive results that we're looking for, that are also verified and backed up by the citations you mentioned.
Jeff Pfeifer 13:38
So I mentioned earlier the benefits that come from those cloud providers related to technology configuration, speed, and performance. But perhaps the biggest benefit is actually the way the model is configured and deployed. In our relationships with major cloud providers, we deploy models in a way that they're always walled. And by that I mean interactions with our models are unique to our users. Those interactions never feed the core model that the service is built on, and third parties never have access either to the data or to the interaction with the model. I mentioned security and privacy earlier. That was the single biggest theme we heard as we talked to clients all around the world: they wanted to understand, how do we make sure that any information we share with your product is unique to our product experience? How do we make sure that anything we feed to the model is not consumable by someone else, and that the information we share is always held either at our account level or even at the individual user level? Those were the bars that we sought to exceed in the way we implemented the models. And that means, again, our models are walled. They are only available to our customers. They never feed back to the original model itself, and the user is in control of deciding whether their history is retained or purged at the end of a user session. All of those are control elements that we think are very important to drive adoption, ultimately, in the legal market.
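The walled-session behavior Pfeifer describes, per-user interactions that never flow back into the shared model and that can be purged at session end, can be sketched roughly as below. The class and method names are hypothetical illustrations, not a real LexisNexis interface.

```python
# Sketch of a "walled" session: the user's prompts and replies live only
# in this per-session store, never in any shared training set, and the
# user decides at close whether the history is retained or purged.
# Names are illustrative, not a real LexisNexis API.

class WalledSession:
    def __init__(self, user_id):
        self.user_id = user_id
        self.history = []          # visible only to this session

    def interact(self, prompt, model):
        reply = model(prompt)      # the core model is read-only here
        self.history.append((prompt, reply))
        return reply

    def close(self, retain_history=False):
        """User-controlled retention: purge by default at session end."""
        if not retain_history:
            self.history.clear()
        return self.history
```

The key design point is the one Pfeifer emphasizes: nothing in `interact` writes to the model, so user data can only ever move in one direction.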
Greg Lambert 15:17
Yeah, I think all of us are butting heads with our security chiefs at law firms right now, telling us to slow down, we want to make sure that we don't let any information that shouldn't be outside of our control get outside of our control. And I know that we've been trying to strike this balance between data security and privacy for our clients, while also trying to increase efficiency and take advantage of the tools. So what is Lexis+ AI doing to ease some of those fears, so that client information doesn't get outside of the firm? And are there encryption or privacy technologies that you're using to enhance that?
Jeff Pfeifer 16:06
Definitely. So I would encourage your listeners to think about interactions with AI models as a circle, if you will, and put a line down the middle of that circle. On the right side of that circle is the user interaction with the application. We've built our model in a way that everything that happens on that side of the circle is limited to that user experience; it's not consumed or retained by the model. It's intended to improve answer quality or draft quality for the user interacting with the model. On the left side of that circle, again with the line down the middle, is the back end of the model and what we do to improve answer quality, and there, we control those activities; we don't consume from our user activities to improve the model experience or the model quality. So what we have philosophically chosen is a hard line down the middle, to ensure that user interactions stay user interactions, and model tuning and performance is something that we work on on the back end, if you will. I think that's really a critical distinction. Our obligation as we talk with clients, especially large law firms, is that they want to understand that security configuration, so that they can be confident that nothing they do when they're interacting with the model is preserved on our side. That is, in fact, the case. But we also have to share that visually, provide diagramming that illustrates how it works, and make sure that people feel comfortable with it. Because personally, I think that is likely to be a big barrier to adoption if people are not comfortable with how those interaction experiences work. Similarly, there are huge IP issues, related to what you described, Greg. On the IP side, for example, our models are only consuming content that we fully own.
So that means, for example, that it doesn't include news content from third parties that we work with, like the New York Times and the Wall Street Journal, because there are limited ways for us to control, on the back end, the consumption of that information into the model at this stage. We think we'll be able to fix that over time. But for now, that means that our models and the answers we provide are limited to the data that we control and that we own. It's a critical way that we're ensuring the model is not only respectful of individuals' concerns about privacy, but also of the third parties that we work with and the copyrights that they own, and that those are respected and not consumed and retained in a core model experience.
Marlene Gebauer 18:56
And I would imagine, just opening it up to public information, you'd have the same sorts of concerns in terms of copyright and ownership.
Jeff Pfeifer 19:07
Absolutely. You don't have to look far to read articles where assertions are being made of copyright violation by consumption into the core model. And I would suggest to your listeners that this is really important as you're thinking about experimentation sandboxes and ideas like that: it is critical to be focused on what kind of API you're deploying in your organization, and whether it has a barrier between the interaction and the core model experience. Most public APIs from model providers don't have that restriction. So if you're uploading copyrighted content, it is likely being consumed and retained in the model that you're interacting with. You can build protections against that, but they require a separate step. That usually means deploying those APIs through a larger cloud vendor, again like Microsoft Azure.
Marlene Gebauer 20:03
So Jeff, in the press release, you say that Lexis+ AI offers conversational search, summarization, and drafting tools. Can you give us some examples of how these tools work and how they can help lawyers in their daily tasks?
Jeff Pfeifer 20:17
Absolutely. So let's start with search. This is something that I've studied for a long time; I've worked in search for a good chunk of my career. And one of the biggest complaints about search has always been the loss of context. You ask a first question, and it's not exactly the right answer, but you don't have a way to tell the search engine, well, that wasn't exactly right; I was really more interested in this area. This is what I described earlier in our conversation as one of the things that represents huge upside in the implementation of large language models, because the large language model does retain that contextual interaction. So if you ask a question about a legal concept or an idea, but it's not exactly right, you can ask the service to refine the answer. And so when I describe conversational search, it is that idea that the user will be able to refine the quality of the answer by asking follow-up and clarifying questions. I see that as potentially revolutionary for legal research, especially for more junior researchers who are learning the process of refining and qualifying the answers they're being asked to deliver. So a huge benefit that we're seeing in early days is that users really value that ability to have an iterative interaction. In drafting, similarly, when we talked to clients, we heard them describe all types of drafting, from the hugely complex, like a really complex merger agreement, on down. But most of the drafting was answering client emails, or providing an advisory statement on some regulatory development, where a quick briefing was needed and could be sent to clients. Those are use cases that we think are very relevant for large language model generative text production. But again, the answers have to be high quality. And if you're sending it to in-house counsel, they're likely going to want information on the legal authority, the regulatory reference, perhaps some other information.
And that's a space that we think we can do really well and really effectively. And then lastly, in summarization, we get requests all the time to summarize datasets that we don't summarize today; maybe it's regulatory administrative opinions for which there aren't summaries available, or a very quick summarization of new complaint filings in federal or state court. Those are summarization tasks that we believe are very addressable, as are other areas where we own data, our Law360 data collection, for example. We're looking at ways that we can provide shorter summaries of that information, to help firms share that current awareness information, if you will, with a wider set of individuals within their organizations.
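The contextual, iterative search described above differs from stateless search in that every follow-up carries the whole conversation back to the model. A minimal sketch of that mechanic, where `call_llm` is a hypothetical stand-in for a real model endpoint:

```python
# Sketch of contextual, iterative search: the full message history is
# resent to the model on every turn, so a follow-up like "narrow that
# to Texas" refines the prior answer instead of starting over.
# call_llm is a hypothetical stand-in, not a real LexisNexis endpoint.

def conversational_search(call_llm):
    messages = []  # the session's running transcript

    def ask(question):
        messages.append({"role": "user", "content": question})
        answer = call_llm(messages)  # the model sees every prior turn
        messages.append({"role": "assistant", "content": answer})
        return answer

    return ask
```

A stateless engine would see only the latest query; here the second call sees both turns, which is what lets the model interpret a refinement like "narrow that to Texas" at all.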
Greg Lambert 23:09
Now, Jeff, I wanted to follow up. You were talking earlier about being able to have that conversation with the interface, so that you don't have to start over again. One of the things that I'm hearing about some products being tested right now, at least a couple of them, is that they don't yet have that ability: you can put in a natural language search, and it gives a really good result, but if you want to tweak it, you've got to start over again. They haven't gotten to that conversational part yet. When you roll this out and people start playing with it, and I know some are playing with it now, is that going to be something that's available right off the bat? Or is that something you'll have to develop over time?
Jeff Pfeifer 23:59
That's something that we are going to make available right away. We think that the ask-anything question, along with a follow-up, is really what's uniquely different about interactions through large language models, and for us, that's a core area of focus. Our clients, I think, will tell us how well we're doing at that iterative search experience. And I would encourage firms to also realize this technology is developing literally in real time, so those capabilities will continue to improve as well; the performance will increase over time. I think we're all going to be on a bit of an experimentation journey here, and what we introduce over the coming months will look very different than what is available next year, but it is the start of that process. And I think it marks a really important step forward in the way lawyers are going to interact with big data sets as they become available from LexisNexis.
Greg Lambert 25:02
Yeah, I know, as a librarian who has worked with natural language searching for 20 years, it's never really worked, not as well as we had hoped. And so I'm hoping, at a minimum, that this will finally enable us to have those natural language searches. Truly natural language.
Jeff Pfeifer 25:25
So, yeah, I understand that. If we think about the evolution of the technology, the world of natural language processing was built essentially on vector analysis, which allowed technology to understand relationships between words. The first language models, probably the best known example being BERT from Google, changed that approach and paradigm: they looked at essentially statistical modeling of the relationships between words, and they substantially increased the number of reference points that were available. So that was a huge step, around 2018. What's so revolutionary now is that the new models multiply that impact by millions. If you think about this as a mathematical problem, the earlier versions had smaller statistical and numerical reference points, and now we have models with reference points in at least the multibillion range, some now approaching the trillion range. And so the ability to create really accurate semantic parsing of what the user is asking creates the contextual ability to retain that conversation thread; that's one benefit. And then in the model answer, if the model can be exposed to high quality content like it sees from LexisNexis, it can consume that information and provide an answer back to the user in a way that yields higher quality results. That is a big technology breakthrough. It is one that everyone assumed was possible years ago, but we couldn't actually make it work in a way that was performant in an application like Lexis+. Now we've solved that problem through the coordination with these large cloud providers and the amazing development efforts taking place by large language model providers, and they create new opportunities for users of a service like Lexis+ AI.
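The vector analysis Pfeifer references can be made concrete with cosine similarity over word vectors: words become numeric vectors, and the angle between them measures how related they are. The tiny three-dimensional vectors below are invented purely for illustration; production embedding models use learned vectors with hundreds or thousands of dimensions.

```python
# Toy illustration of vector-based word relationships. The vectors are
# hand-made for illustration only; real embeddings are learned.
import math

def cosine(a, b):
    """Cosine similarity: 1.0 for identical directions, near 0 for unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

vectors = {
    "contract":  [0.90, 0.10, 0.00],
    "agreement": [0.85, 0.20, 0.05],
    "banana":    [0.00, 0.10, 0.95],
}

# Related legal terms score close to 1; unrelated terms score near 0.
related   = cosine(vectors["contract"], vectors["agreement"])
unrelated = cosine(vectors["contract"], vectors["banana"])
```

The jump Pfeifer describes, from BERT-era models to today's, is essentially this same idea scaled from millions of parameters to hundreds of billions, which is what makes accurate semantic parsing of a whole conversation feasible.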
Marlene Gebauer 27:32
So I'd like to shift the focus a little bit to the commercial preview program. You're collaborating with leading AmLaw 50 firms as part of this program. How are you selecting and working with these firms? What kind of feedback and advice are you getting from them? And what do you want as the ultimate result of the program?
Jeff Pfeifer 27:54
Well, we're really starting that process now. And the point I would emphasize, and have to every participating firm, is that this is an innovation exercise to some degree. So it's important that firms both have an interest in that type of innovation and have the staff resources available to commit to that experimentation work. And also, I would say, as a profile, they should have a tolerance for areas that will not perform exactly as hoped. We're looking for that feedback so that we can make adjustments to our development plan, and improve our own pace of development to address things that they see as higher priority use cases than perhaps we have prioritized so far. So in summary, we're looking for firms that have a genuine interest in experimenting in this large language model space, that have staff resources capable of doing that work, and that help us identify the use cases that are really big and important, related probably to broader strategy questions the firm is grappling with in their own business development. We've also developed a parallel program that we call our Lexis+ AI Insider program, and that is open to any firm that's interested. What we found in our research was that many firms are learning; they're in the very early stages of understanding the space and trying to understand the implications for their business. So in that program, we've set up a series of webinars, a series of white papers, and the ability to learn about this product development over time. We created a sign-up mechanism that has been active, at the time of this recording, for only a little less than a week, and we have several thousand law firms that have already signed up for that program.
To me, that suggests there's tremendous interest, and a lot of law firms are thinking about the implications of that development for their business and what they should be considering on the technology front that maybe they're not thinking about today. And I'll just share one high-level conclusion that I've drawn from talking to a lot of firms around the world. I see firms in general fitting into one of three categories right now related to generative AI technology. They're either very excited about it, and they have their own experimentation work going on in house already, sandbox activities, protected activities, following all of the confidentiality rules that you mentioned earlier, Greg. On the other end of that spectrum are firms that are very concerned; it could be for regulatory reasons, or it could be that they see tremendous risk in the technology, and they are taking a very cautious approach. And then there is a large number of firms in the middle that either don't know, or haven't explored at all, what the implication is for their business and their organization. So through this process, that Insider program is designed significantly to deliver education and awareness for those that might fit in the middle, and that are starting to really explore and understand what the implications of this technology are for their organization.
Greg Lambert 31:18
I imagine you're probably going to split that into categories, if you get that many that have signed up already. Because you're going to have people that have security issues that they want the Insider program to address, you're going to have some that want search capabilities and use cases, and you're going to have some that will want to know how they can develop things internally. So do you see that Insider program as having almost these different categories that people might be interested in taking advantage of?
Jeff Pfeifer 31:49
Participation in that program is voluntary and self-selective. So we're going to provide a wide range of topics, from security, to learning about large language models in general, how they work, and how they might be beneficial in the law. We believe that many participants on that side of the program will be thinking about what they want to learn and what they hope to explore, and then self-select the components that are right for them and their firm. On the other side, those firms in our commercial preview are going to have hands-on access; they're going to be working directly with us to inform the choices that we make about further development and model iteration. Those are really key distinctions between the programs. One is an active pre-development program, if you will, and the other is a learning and awareness program that we hope contributes back to the market information that helps firms understand the potential impact of generative AI technology.
Greg Lambert 32:51
So Jeff, in the press release, there was a statement you made where you talked about how LexisNexis follows the RELX responsible AI principles to guide how it uses AI across its solutions. I know we’ve talked about this before, and we’ve had other people from Lexis on talking about the AI principles, or just the principles for human rights and preventing bias. What are these principles on the AI side, and how do they help you prevent things like bias and even protect human rights?
Jeff Pfeifer 33:30
So our commitment in this space is well documented. For those that might be interested, our parent company is RELX, a large information provider in multiple areas: science and healthcare, for example, law, and others. We developed and published a set of artificial intelligence principles that guide the development choices we make. They’re available on our parent company’s website, under the corporate responsibility section, if you want to take a look yourself. They have five key elements, and I’ll just say them really quickly. Number one, we understand the real-world implications of the AI development that we’re doing, so that we understand and think actively about what the impact of our work will be on an individual. Number two, we take action to prevent the introduction of bias, and we do that in a number of ways, from the content sources that we select to the statistical modeling that we look at in the performance of the model itself. Number three, we can explain to customers how our solutions work, so that there’s transparency in the way the AI is implemented; it’s not in a black box, if you will. Fourth, we create accountability in our development by always having human oversight in the model development, the AI development, and the implementation. In the legal division, we do that by always having a set of legally trained professionals in the loop on the choices that we make about model construction and deployment. And fifth, we champion individual privacy and data governance in every AI implementation that we drive. So as we think about the markets that we serve around the world, and individual privacy requirements in Europe versus the US, for example, we’re thinking about what it takes to meet and comply with the privacy requirements and individual control requirements that exist under regulatory schemes that differ all around the world.
So those are the five principles that drive our use of AI and our thinking about how it is most effectively deployed. And again, this space is evolving so quickly that we’re testing some of those and thinking about whether there are new requirements we need to consider in the application of the technology. But we also maintain an independent group within the organization that is focused on providing guidance to our product teams and our technology teams, so that they keep these issues first and foremost in the development choices that they make.
Greg Lambert 36:17
How do you envision these types of tools, these generative AI tools, you know, transforming the legal profession and even advancing the rule of law, and that not necessarily just local in the US but across the globe?
Jeff Pfeifer 36:32
Well, my hope is that the main benefit here is that we increase access to, and capacity for, legal services. We’ve spoken a lot in the discussion today about bigger organizations and what they’re doing to support their clients. In other market segments, like small law firms, for example, the single biggest issue today is whether legal work can be performed in a profitable way, and that often drives organizations to either take or reject legal work based on whether or not it can be profitably delivered. In a rule-of-law sense, we think that these technologies can be deployed in a way that allows lawyers both to take on work that might not have been profitably deliverable in the past, and to increase their capacity to a level where they can support more clients in certain types of legal disputes. So my hope is that we overall create a greater capacity for legal services, and that around the world, that means more people can access legal services through legal professionals. We do not intend to drive our development directly to the consumer; our development is focused on enabling legal professionals to provide more legal services support for clients that have an interest in procuring their services. And I think that is likely to be the biggest benefit related to the rule of law. But in the profession itself, the clients that we’ve talked to so far see hours of savings in drafting specifically. Thinking about how long it takes lawyers to simply create a client email that is an advisory on a new regulatory development, they see that as something that could go from a 45-minute task to a 10-minute task. And that, I think, is really exciting. It would allow them to then move on to the next client challenge, or a follow-up question, or some other support activity that today is hampered by the simple availability of hours.
Greg Lambert 38:48
We’ve talked a lot about what private law firms and what in-house counsel at corporations can be doing with this. What do you see as your relationship with, and how do you advance, things like access to legal services for the public through the courts and other public organizations? Do you have a plan for how you’re going to interact with these agencies and courts?
Jeff Pfeifer 39:17
Definitely. You know, I think that the benefit could be bigger with some of these public institutions. I look at, for example, the LA county courts, which have done a tremendous amount of work to provide additional resources for pro se litigants. And it seems quite plausible to me that these technologies could be deployed through organizations like county courts around the country to improve the quality of interaction experienced by a pro se litigant with the courts. It’s beneficial to the courts, because right now they have a problem processing those legal cases when certain policies and requirements aren’t followed. And at the same time, it also creates the opportunity for the pro se litigant to better prepare for their interaction with the court. So while that isn’t our immediate focus, I see it as something that’s well within the range of possibility. We’ve had discussions with a number of individuals related to the federal government’s focus on driving the use and deployment of generative AI technologies, especially in administrative functions within the government, and I see many opportunities to improve and streamline those interaction experiences. So yes, while our topic today is largely focused on the benefit to law firms and in-house counsel, I absolutely see the same kinds of benefits applying to the courts, and to those supporting individuals that are interacting with the courts in a pro se fashion.
Marlene Gebauer 40:58
So Jeff, we’ve talked a lot about how generative AI can potentially impact the legal profession. This is the point in the podcast where we ask the crystal ball question. And I’d like to frame it by saying there’s obviously a lot that we could talk about here in terms of the future and how it’s going to impact the legal profession. But if you had to give a nugget, a takeaway from this entire podcast that you would want to share with people in terms of how generative AI tools like Lexis+ AI are going to impact the profession over the next two to five years, what would you tell them? What’s that takeaway?
Jeff Pfeifer 41:42
I think the takeaway would be twofold. One, I see tremendous opportunity to improve efficiency in the profession, which will generally drive things like improvements to capacity and the ability to support client work. So that should benefit the profession. A lot of people are speculating: does that mean fewer lawyer roles? I don’t see that. In fact, this year is the 50th anniversary of LexisNexis, and 50 years ago people were predicting that the introduction of electronic research would eliminate lawyer jobs. That’s not happened. In fact, there are infinitely more lawyers today than there were 50 years ago; we’ve changed what we can do and what kinds of legal services we can provide. So to me, I see a tremendous benefit on the business side. On the other side, a word of caution: responsible application of Large Language Models is something that everyone experimenting with this technology needs to think about. There are already well-documented examples of problems. Copyright infringement has been asserted, and there have been individuals that have uploaded trade secret information that then appeared in answers for other users. The implementation of Large Language Models, and artificial intelligence broadly, requires responsible deployment and monitoring of the technology. So I would encourage individuals to look closely at the vendors they’re selecting to do this work, and to work within their organizations to make sure their teams understand the risks associated with the deployment of the technology. It is all infinitely manageable, but you have to have the right resources, the right skill sets, and the right focus on the bigger-picture questions as you experiment with and deploy these technical capabilities.
Greg Lambert 43:43
Well, Jeff Pfeifer, Chief Product Officer for Canada, UK, Ireland, and the US at LexisNexis, thank you very much for coming in and talking to us about both the Lexis+ AI commercial preview and the AI Insider Program. And let’s try not to make it another four and a half years before we get you back on the show. Thanks, Jeff.
Jeff Pfeifer 44:04
Thank you, Greg. And Marlene, it’s been great. And I definitely will do it sooner than four years the next time around.
Marlene Gebauer 44:11
Thank you, Jeff. And of course, thanks to all of you, our listeners for taking the time to listen to The Geek in Review podcast. If you enjoy the show, share it with a colleague. We’d love to hear from you. So reach out to us on social media. I can be found at @gebauerm on Twitter,
Greg Lambert 44:25
And I can be reached @glambert on Twitter. Jeff, if someone wanted to find out more, where’s the best place for people to find you online?
Jeff Pfeifer 44:33
Twitter as well. You can find me at @JeffPfeifer on Twitter, or also on LinkedIn; you can search for my name, and I post regular updates about some of the things that we’re working on in this space.
Marlene Gebauer 44:45
And listeners, you can also leave us a voicemail on our Geek in Review Hotline at 713-487-7821. And as always, the music you hear is from Jerry David DeCicca. Thank you, Jerry.
Greg Lambert 44:56
Thanks, Jerry. All right, Marlene, I will talk to you later.
Marlene Gebauer 44:58
Okay, bye bye.