This week we bring in Christian Lang, the CEO and founder of LEGA, a company that provides a secure platform for law firms and legal departments to safely implement and govern the use of large language models (LLMs) like OpenAI’s GPT-4, Google’s Bard, and Anthropic’s Claude. Christian talks with us about why he started LEGA, the value LEGA provides to law firms and legal departments, the challenges around security, confidentiality, and other issues as LLMs become more widely used, and how LEGA helps solve those problems.

Christian started LEGA after gaining experience working with law firms through his previous company, Reynen Court. He saw an opportunity to give law firms a way to quickly implement and test LLMs while maintaining control and governance over data and compliance. LEGA provides a sandbox environment for law firms to explore different LLMs and AI tools to find use cases. The platform handles user management, policy enforcement, and auditing to give firms visibility into how the technologies are being used.

Christian believes law firms want to use technologies like LLMs but struggle with how to do so securely and in a compliant way. LEGA allows them to get started right away without a huge investment in time or money. The platform is also flexible enough to work with any model a firm wants to use. As law firms get comfortable, LEGA will allow them to scale successful use cases across the organization.

On the challenges law firms face, Christian points to shadow IT: people will find ways to use these technologies with or without the firm’s permission, so firms need to provide good options to users or risk losing control and oversight. He also discusses the difficulty of training new lawyers as LLMs make some tasks too easy, the coming market efficiencies in legal services, and the strategic curation of knowledge that will still require human judgment.

Some potential use cases for law firms include live chatbots, document summarization, contract review, legal research, and market intelligence gathering. As models allow for more tailored data inputs, the use cases will expand further. Overall, Christian is excited for how LLMs and AI can transform the legal industry but emphasizes that strong governance and oversight are key to implementing them successfully.

Listen on mobile platforms:  Apple Podcasts |  Spotify


Contact Us:

Twitter: @gebauerm or @glambert
Voicemail: 713-487-7821
Music: Jerry David DeCicca



Marlene Gebauer 0:07
Welcome to The Geek in Review, the podcast focused on innovative and creative ideas in the legal industry. I’m Marlene Gebauer.

Greg Lambert 0:14
And I’m Greg Lambert. So Marlene, there’s been just a ton of things going on in the legal world involving using generative AI. You know, the tools like ChatGPT, Bard, Bing Chat, all the others that are out there. And of course, the big news that everyone’s heard about over the Memorial Day weekend was the lawyer Mr. Schwartz in New York, who apparently just copied and pasted his brief that he filed with the court straight from ChatGPT. Now, I just found out that this guy was a 30-year lawyer. He wasn’t like a newbie on this. So to me, he broke rule number one of the internet, which is don’t do anything stupid.

Marlene Gebauer 0:59
Yeah, it’s like Gen AI Gone Wild. It’s just common sense that you would, of course, check your work before you submit it to the court.

Greg Lambert 1:11
But don’t check your work with ChatGPT, as it will double down on the lie. So the second big news that came out this week: I saw that in the Northern District of Texas, Judge Brantley Starr created a new rule for his court. Anyone appearing before him must certify that, if they’ve used any generative AI tools, a human has reviewed the work before it was filed with the court. Any human. Yes, it just has to be a human. And it actually says human in the court rule. I have to admit, at first this kind of pissed me off when I saw it, because I’m like, oh, great. And I can’t remember who it was on Twitter, but it was something like, well, never underestimate the ability of a judge to make something even more complicated than it needs to be. But I did have someone at my firm calm me down a little bit and basically point out that, you know, this is a Rule 11 thing. This is basically just reminding attorneys that they are the ones responsible for anything that they turn in to the court. And it basically keeps the lawyers from breaking rule number one of the internet.

Marlene Gebauer 2:40
Yeah, Mr. Schwartz is why we can’t have nice things.

Greg Lambert 2:43
This is why we can’t have nice things in legal. Oh, there are so many other reasons for that as well, I am sure. Well, you know, the thing is, this has really put the legal world on edge at the moment because of this news. And teams are kind of breaking into camps of team AI versus team no AI. So I guess, are you Team Sparkly Vampire or Team Werewolf? I think that’s how we need to put it. But the two big reasons are, you know, the hallucination issues and the privacy issues, which makes total sense. And I think the legal industry can walk and chew gum at the same time. So I don’t think this is, you know,

Marlene Gebauer 3:41
I don’t think it’s an either-or. I think it would be a mistake to think about it that way. Yeah. This is something that will be used; we just have to figure out how it will be used.

Greg Lambert 3:56
Just like any other tool, exactly. It will continue on. And our guest this week, you know, we brought him in so we can talk about both of these issues and a lot more.

Marlene Gebauer 4:09
Yes. So we’d like to welcome back to the show Christian Lang. Many of us know Christian through his great work through Reynen Court, through the New York legal tech meetup, and as an advisor with LexFusion. But now he’s taking on a new opportunity by starting LEGA, a platform that provides a safe space for large language model exploration. So Christian, it is great to have you back on The Geek in Review, and I know you’re gonna tell us what LEGA stands for.

Christian Lang 4:33
No, it’s great. It’s great to be back. It was originally an acronym, just a generic placeholder for a name: Large language model Enterprise Governance Application. And then when I saw the acronym, I was like, well, that’s a nice name. There’s no way the IP is going to be clear on that. And I did a check, and I don’t know much about IP law, I’m not sure I checked right, but I was like, oh, that could actually be a great name. And it’s close to legal, and it’s close to Lego. So we just ran with it.

Marlene Gebauer 4:57
That sounds great. LEGO is immediately what I thought of when I saw it.

Greg Lambert 5:00
If any industry needs a safe space, it’s legal. So thank you.

Marlene Gebauer 5:05
So a lot has changed since we last had you on the show back in 2019. Can you talk to us about your transition from the really cutting-edge work that you were doing at Reynen Court? And how, after all that wound down, you came to the idea of starting LEGA?

Christian Lang 5:19
Sure. Well, it’s great to be back with you. Thanks for having me. Let’s see, the Reynen Court transition is highly relevant. I mean, Reynen Court fundamentally was all about trying to help legal enterprises get access to the best, like the latest and greatest tools, while retaining control over their data, which is a challenge in a cloud computing context. You know, the SaaS paradigm has you duplicate your data and send it outside the security perimeter or somewhere else, but there are new tools in the market that let you bring the app to the data instead. So we spent years trying to focus on how to solve that challenge for legal, and really leaned into standardization as a mechanism by which you can provide speed and nimbleness, so that firms and legal enterprises can navigate a shifting landscape. And I think we’re seeing that change accelerate. And it was really kind of through that lens that this whole opportunity cropped up. Because with this new wave of generative AI technologies, I mean, the pace of change is absolutely breathtaking, and almost paralyzing. But I think with the right mindset and right tool set, you can start to, again, give people the tools they need to retain some of the control, so that they can lean in and try to find the commercial applications and use cases. So it dovetails nicely. I mean, it’s a very different business, but the perspective that we cultivated at Reynen Court, the relationships, and the sensitivity that I developed to the way that large law firms in particular think about enterprise administration of applications and data governance is kind of where all this came from. Yeah.

Greg Lambert 6:49
Does the LEGA model that you’re using differ quite a bit from how you were setting up Reynen Court, with its focus on enabling adoption of cloud-based legal applications?

Christian Lang 7:02
Yeah, very much so. I mean, it’s kind of born from a common perspective and built on some of the same learnings, but it’s an entirely different thing. As I’m sure we may talk about a little bit, LEGA is focused on a couple of things, but the tip of the spear is governance. It’s about helping to drive the visibility, transparency, and kind of operational hygiene you need to be able to get started, get your hands on tools, capture learning from those tools, and start to scale that learning, etc. At Reynen Court, we didn’t always do a great job of articulating it elegantly so everyone really understood it, but there was an incredibly robust technology layer beneath that doing the actual deployment and container orchestration that facilitated that cloud-native, portable deployment paradigm. In this world, the generative AI world, I’m excited that this may not be the case for long, and maybe we’ll talk about that later. But right now, everyone’s kind of consuming these large hosted models run by big tech companies. You can hit those through the consumer applications, but you can also hit them via API. And so a lot of what we’re building in the LEGA platform layer is about facilitating those API connections, but in a very particular way that helps keep the enterprise a little bit more in control of the data, the access, and the policies.

Marlene Gebauer 8:21
Right. So following up on that response, I mean, you’ve talked a little bit about the rapid pace of change. You’ve talked about the need for governance, you’ve talked about how firms are looking more at sort of the API options, thinking about all of those points, you know, why do you see the need for a solution that’s aimed at helping law firms safely implement and govern the use of Large Language Models? I mean, what is sort of different about them, as opposed to like the rest of us?

Christian Lang 8:52
This is, you know, I’m sure no one who’s listening to this sort of podcast needs to be told this, but this is obviously an exceptional moment. It might be unique. I mean, it’s extraordinary not only how incredible and kind of mind-bending the technology is; it’s extraordinary how quickly it’s become a part of the zeitgeist, and it’s just there, and everyone understands it. So to answer your question, Marlene, we’ve got a situation where every firm has hundreds if not thousands of professionals playing with things like ChatGPT in their own personal accounts, outside the visibility of the firm, doing God knows what, you know, maybe putting confidential client data into these systems even if they’re not supposed to.

Greg Lambert 9:32
They are.

Christian Lang 9:32
You know this, they know. Yeah, and we know that that’s the context, right? As cool and exciting as the OpenAI models are, that’s just literally the tip of the iceberg. We’ve already seen Google’s Bard come to market and Anthropic’s Claude model, and there are dozens, if not scores, more today, and we’re about to have hundreds more. So you’ve got everyone playing in the consumer tools, you’ve got a bunch of new tools coming to market, and everything’s moving so fast. There’s this analysis paralysis where it’s kind of hard to get started. And so you put those factors together, and it just occurred to me, looking through that lens we were talking about earlier, that there was a real opportunity to provide a simple, elegant platform layer where you channel users out of the consumer tools into enterprise accounts, where the firm or the enterprise is controlling the account and the API key access to the underlying model. You give them the tools they need to configure solutions and provision access to those solutions for users. And then, through a governance layer, you do some very simple things. You give them the ability to manage user access to solutions using AD and SSO, the way you do for the entire rest of your tech stack. When you have policies you want to implement, you’re not sending those out just by email, which you know people are going to not read in the first place or forget two days later. You make sure that every time somebody hits a solution, there’s a configurable compliance checkpoint, so they know exactly what they can and can’t do in respect of this solution at this moment in time, because that’s going to change dramatically based on what model is driving the solution and the factor of time. You might be comfortable tomorrow with something you’re not comfortable with today.
And then finally, and most importantly, if you’re channeling the traffic through the API-driven solutions of the enterprise that you control, and you control that traffic routing, you can take everything going into a model and everything coming out of a model and put it into an audit log, so you can see what people are doing. That’s important on day one for compliance purposes, to make sure people are adhering to the policies and, Greg, as you were saying, not doing anything stupid that they shouldn’t be doing, right? But I’m actually much more excited about the broader opportunity to be able to see what people need, because I think we’re in one of those phases now where we just need to get our hands on it, we need to play with it, we need to identify use cases. And when we find good use cases, or we engineer great prompts to unlock value, we need easy and elegant ways to scale that across the enterprise. So the audit trail aspect of LEGA, I think, is the tip of the spear. The foot in the door is the compliance angle, but I actually think the commercial opportunity behind it is huge. And so that was kind of the notion of why I thought law firms needed something like a LEGA, and that’s the genesis of how all this came to be.
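The governance pattern Christian describes (the enterprise holds the API key, users pass a configurable compliance checkpoint, and every prompt and response is written to an audit log) can be sketched roughly like this. This is not LEGA’s actual code; every name and field here is hypothetical, and a real implementation would use a durable, append-only log rather than an in-memory list.

```python
import datetime

AUDIT_LOG = []  # illustrative only; in practice, a durable, append-only store


def call_model(send_request, user, solution, prompt):
    """Hypothetical governance wrapper: every request to a hosted model
    passes a compliance checkpoint and is recorded in an audit log."""
    # Configurable compliance checkpoint: the user must have acknowledged
    # the current policy for this solution before the request goes out.
    if not user.get("acknowledged_policy"):
        raise PermissionError(
            f"{user['name']} must accept the current policy for {solution['name']}"
        )

    # The enterprise, not the individual, holds the API key for the model.
    response = send_request(solution["api_key"], solution["model"], prompt)

    # Everything going into the model and everything coming out is logged.
    AUDIT_LOG.append({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user["name"],
        "solution": solution["name"],
        "model": solution["model"],
        "prompt": prompt,
        "response": response,
    })
    return response


# Toy stand-in for a real model API call, so the sketch runs on its own.
def fake_send(api_key, model, prompt):
    return f"[{model}] summary of: {prompt[:40]}"


user = {"name": "associate1", "acknowledged_policy": True}
solution = {"name": "doc-summarizer", "model": "gpt-4", "api_key": "firm-key"}
print(call_model(fake_send, user, solution, "Summarize this client memo ..."))
```

The point of routing every call through one wrapper is exactly what Christian says: the checkpoint and the log live in the routing layer, so they apply no matter which model or solution the user hits.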

Marlene Gebauer 12:07
Is part of the solution you’re talking about here, in terms of figuring out use cases and things, also about sort of which of these LLMs are good at what? Because, you know, some of them are better at certain things than others. Or, you know, some firms that are testing it are limiting them in different ways. And so it’s kind of hard to compare in terms of what’s the right thing to use for a particular purpose.

Christian Lang 12:34
I think that’s exactly right. I think there are some great people out in the market doing really important and interesting things in respect of specific models. There are some companies built around amping up GPT-4 for legal that do great work. But we may want, like, what’s an example? You know, if I’m trying to get a taxi brief on a client before I go to a meeting, a GPT-4 model where the training data stopped in September 2021, that’s not actually going to help me with current information, right? I might need something driven by Bard; I might need something driven by Claude or any number of other models. I think one of the coolest developments that’s coming, that we may see, is the ability to host significantly smaller models in your own estate, so they can interact with your proprietary datasets in a way that gives you confidence that the data is not being exfiltrated or going somewhere it’s not supposed to go, right? And I think that’s exactly the point. We need those sandboxes where we can try and test things alongside one another, see what’s good at what, and kind of play and tinker. And I think that’s in part what has informed the way that we’re attacking LEGA. When you want to tinker, you don’t want to go hire a bunch of developers to build a UI for everything you want to try and test, right? You need a way to get apps or solutions in hand quickly. And because that application layer is going to constantly be changing, you don’t want the governance tools in the application layer; you want those to be constant. So LEGA is essentially distilling out of the application layer the user management, the policy management, the governance, and the audit logging, so that that doesn’t change, so that it’s consistent across solutions and, to your point, Marlene, across different models.
So when you’re trying and testing different things, you don’t have to have this balkanized kind of governance landscape and data setup. It’s consistent. And that’s really the gist of what we’re trying to do.
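That idea of distilling governance out of the application layer can be sketched as a record where the model and prompt vary per solution but the governance fields, and the one access check applied to them, stay the same across every model. Again, this is a hypothetical sketch, not LEGA’s data model; the field names and group names are invented for illustration.

```python
from dataclasses import dataclass, field


@dataclass
class Solution:
    """Hypothetical solution record: model and prompt differ per solution,
    but the governance fields are uniform regardless of which model drives it."""
    name: str
    model: str                 # e.g. "gpt-4", "bard", "claude"; interchangeable
    system_prompt: str
    allowed_groups: set = field(default_factory=set)  # mapped to AD/SSO groups


def can_access(solution, user_groups):
    # One access check for every solution, whatever model is behind it.
    return bool(solution.allowed_groups & set(user_groups))


summarizer = Solution("doc-summarizer", "gpt-4",
                      "Summarize documents concisely.", {"litigation"})
briefing = Solution("client-brief", "claude",
                    "Produce a short client briefing.", {"corporate", "litigation"})

# A corporate-group user reaches one solution but not the other,
# and the check itself never cares which model is underneath.
for s in (summarizer, briefing):
    print(s.name, can_access(s, ["corporate"]))
```

Because the governance layer only touches the uniform fields, swapping `model` from one provider to another leaves access control, policies, and logging untouched, which is the “consistent across models” point above.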

Greg Lambert 14:19
You gave a pretty good explanation about how you secure the platform. But I’m curious what kind of questions you’re getting from the IT and security teams, because I’m going to bifurcate this discussion a little bit into the security piece (we have a whole group, and IT jobs in the past decade have focused much, much more on security than ever before) and then the actual use case side of the argument as well. So sticking with the IT and security parts, what kind of questions are they coming to you with? And is there any type of certification that they’re looking for, or statement of work, or whatever? What are some basic things that they’re asking you at this point?

Christian Lang 15:17
Yeah, it’s a great question. It’s funny, those two constituencies map onto the two parts of the LEGA infrastructure, and I’d be very curious to see how this plays out over the next 12 months. In terms of what kinds of questions they’re asking, people are really just trying to get their brains around how this technology works, how data flows, where it lives, and how it’s secured. I think a lot of people, particularly in these large enterprises that we all know and love, are putting a lot of stock in the fact that Microsoft has leaned in so aggressively to incorporating some of these technologies. But you know, the dirty little secret, and it’s not even so much a secret anymore, is that when you start asking the hard questions of the Microsoft folks, they don’t even have answers yet, because it’s all happening so quickly. There are a lot of open questions. I’ve been lucky to have conversations with, I don’t know, 50, 60, 70 law firms, talking about their strategy and how they’re thinking about things. I think the vast majority are still trying to get started and kick the tires with information that’s not necessarily confidential, certainly not the kind of deeply confidential, crown-jewel type of information. And so they’re just trying to figure out, okay, who’s hosting what, where’s it going, et cetera. But I think there are some really significant security issues that we’re all going to have to delve into. AI models generally are pretty susceptible to adversarial attack because of how they’re set up. And obviously, when things are moving this fast, that’s fundamentally antithetical to careful security work. I think we’re going to have a pretty tough context. That said, I think it’s really important.
And you mentioned the advisory work I do for LexFusion. Casey Flaherty has done a lot of great work and writing about this recently: we can’t get bogged down in the security conversation; we’ve got to be focused on where this goes from a substantive perspective and on the types of solutions we can drive. Particularly if we look beyond law firms and start thinking about corporates and enterprises, if the lawyers want to be sitting at that table, a part of the conversation driving where this goes, we can’t be focused on the limitations and the no’s. But it is interesting, Greg, that everyone is starting from square one, trying to figure this out, figure out what the security means. And a lot of people are putting a lot of faith in proxies, but kind of throwing up their hands a bit, being like, well, I hope Microsoft knows what it’s doing, because I’m not sure what I’m doing.

Marlene Gebauer 17:38
So I like your focus on how do we get to yes, as opposed to all of the no’s. What types of solutions (and I know this is a huge question) can law firms build and configure on the LEGA platform?

Christian Lang 17:53
Yeah, it’s a great question. So I think, you know, the short answer is the sky’s the limit. But let’s talk more specifically about what that is in its earliest kind of MVP state, what LEGA is today, the pure thing that we’ve launched in the market that is actually being used. That’s consuming, via API, the big hosted foundation models that are out there: GPT-3.5 Turbo, GPT-4, Bard, Claude, etc. Everyone clearly wants, and is going to use, some sort of live chatbot that’s like a ChatGPT alternative, right? There’s some low-hanging fruit that are also very big, abstract things where you can tighten things up. You know, you could always do this work through a chat window if you knew the right prompt commands or the way to engineer the prompts, but there are ways to craft solutions so that some of that prompt engineering is done in advance: utilities like summarizing documents. I mean, there’s a use case I love to talk about when I’m having a conversation about this at a law firm. It’s a silly little use case, but it’s powerful. When I was in practice, when you’re a mid-level associate trying to start getting involved in business development, the number one thing you do, almost the only thing you do, is keep an eye on the news and see what’s happening that might be relevant to your key clients, you know, the ones with whom you want to deepen relationships, and send it to them: I wanted to make sure you saw this, or I was thinking of you, or just to stay front of mind, right? And you can’t just send a busy client a document or a link, right? You have to send them the key takeaways in the body of the email. And I think everyone understands, and if you don’t understand, let me tell you, it’s amazing how that 20-30 minutes is a real challenge when you’re billing 18 hours a day and going flat out.
So just the ability to have documents summarized well, which these large language models are phenomenal at, is a great, simple, easy use case. There are some translation use cases, etc. However, there’s a whole different layer of much more specific things that are pretty exciting, and if you have the right tools to twist dials, you can create them very easily. Here’s an anecdote. We put out a launch press release on Wednesday. One of our early adopters is Womble Bond Dickinson; their chief knowledge and innovation officer, Bill Koch, was quoted in it, and we’re very happy to have them on board. When we were doing our launch workshop at Womble, I was with Bill, talking him through how the configurations work. Without delving into private information, he had a brainstorm in the moment, thinking, oh, I see this solution. Can we do this other thing that looks a lot like that? I’m like, I don’t know. Let’s try it. So we literally just duplicated a solution, went into the configuration parameters in our admin panel, and started playing with the system prompts. This was an OpenAI-driven solution, so we could tell the model what hat to wear. We changed the hat-wearing instructions of the model to do what Bill was suggesting, and four minutes later, we had a working solution doing the thing he was trying to do. And I think that’s emblematic of where we can go if we have the right kind of workbench, sandbox type of approach here.
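The “duplicate a solution and change its hat-wearing instructions” move Christian describes amounts to cloning a configuration and swapping only the system prompt, the instruction sent ahead of every user message in chat-style APIs. A minimal sketch, with entirely hypothetical solution names and prompt text:

```python
import copy

# Hypothetical existing solution config; fields are illustrative only.
summarizer = {
    "name": "doc-summarizer",
    "model": "gpt-3.5-turbo",
    "system_prompt": ("You are a legal assistant. Summarize the supplied "
                      "document in three bullet points for a busy partner."),
}


def duplicate_with_new_hat(solution, new_name, new_system_prompt):
    """Clone an existing solution and change only its 'hat-wearing'
    instructions, i.e. the system prompt; everything else carries over."""
    clone = copy.deepcopy(solution)
    clone["name"] = new_name
    clone["system_prompt"] = new_system_prompt
    return clone


# A few minutes of tinkering: same model, same plumbing, new behavior.
client_alert = duplicate_with_new_hat(
    summarizer,
    "client-alert-drafter",
    "You are a mid-level associate. Turn the supplied news item into a "
    "short, client-ready email with the key takeaways in the body.",
)

print(summarizer["name"], "->", client_alert["name"])
```

Because only the prompt changes while the model, access controls, and audit plumbing carry over unchanged, a new solution like this really can be stood up in minutes rather than built as a new application.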
There are tons of marketing use cases we have firms thinking and talking about. We talked earlier about taxi briefs. That came out of a breakfast I was having with a CIO at a Magic Circle firm a couple of months ago, and we were chatting about the whole concept of: you’re running into a meeting and you need a quick download of information, everything that’s happening. You might ask a paralegal, you might hit your associate, you might throw it down to the library. Well, you can literally configure a solution that will just spit out the data and the information you want. You can just drop in a company name, and bam, it’s in your hands in five seconds. And these are just silly little examples that don’t seem that powerful, and this is just the tip of the iceberg. We’ll really start to be able to lean into substantive things, particularly as soon as we have access to models where we can control what data goes in and where it lives.

Greg Lambert 21:26
Yeah. And I think one of the things that a lot of people worry about, they should probably flip and worry about in the other direction. They’re going to see things like this, and you’re going to hear people go, oh, if they’re not coming to me for that anymore, then what use am I? The problem is, they’re not coming to you in the first place, because no one has the time to do this. And what’s going to happen is, all of a sudden, they’re going to come to you with much more extensive requests, because they’re coming with more information up front. And the thing with that is, the people that never used you are now all of a sudden going to use you. So you’re going to be overwhelmed with this. And so, you know, it’s one of the things that I don’t think a lot of people are thinking about: how do we react to the changes that are going to be coming to us? Especially if you can create something in four minutes, and then all of a sudden you’ve got work going out for probably months or years from there. Again, I think it’s one of the things that is not on people’s radar at the moment, but I think it should be.

Christian Lang 22:44
Yeah, I think it’s a great point, Greg. Look, let’s be real. Of course there are practice areas, if you’re thinking about firms, or maybe roles within firms, where the party in question is purely rent-seeking. They’re not actually adding any value; they’re just coasting. Of course those things exist. But I think people dramatically underestimate how much work, and more valuable work, there is to be done that’s just not getting done today. And to your point, once you start having these enabling tools, it helps guide you to where you need to be to do that stuff. And the market for legal services is dramatically underserved. You know, pick your number out of a hat, but something like only 8% of the legal services market is actually served today, for all kinds of reasons. There’s plenty of work to fill up any minutes that might be created by expediting processes that shouldn’t be done in a manual way.

Marlene Gebauer 23:36
So Christian, you partnered with Betty Blocks for LEGA’s technical infrastructure. Why did you partner with them? And how does that partnership benefit customers?

Christian Lang 23:47
Yes, this is another relationship, both conceptually and very tangibly, that came out of the Reynen Court experience, because they were a close partner of ours at Reynen Court. We used to work closely with them. I’m a huge fan of Betty Blocks. I think they’re the best, or one of the two best, application development platforms that play in the legal space.

Greg Lambert 24:07
I’ve always loved their name.

Christian Lang 24:08
Yeah, they’re great. I mean, they’re right in the Goldilocks zone. There are very few platforms that have user friendliness but huge power behind it, so you can do almost anything; usually you have to pick and choose between those two things. And then there’s the lesson from Reynen Court that generated that relationship. I’m just deeply sensitive, to your point about the fact that IT teams have been almost exclusively focused on security for the last 5-10 years. I just knew what was going to be acceptable and not acceptable from an architecture and data security perspective at these top firms. And not necessarily, maybe we’ll talk about this, that these top firms are necessarily LEGA’s best customers; that’s just where we’re getting started. I just knew that if you show up with a business that was started three months ago and say, hey, trust me, I did these things, I don’t have any certifications yet, I don’t have a big team of QA engineers doing these things yet, but just, you know, trust me, that’s a nonstarter. So Betty Blocks not only is a phenomenal platform, they are an incredibly mature operation that serves some of the most security-conscious customers in the world, not only law firms, but beyond law firms in the intelligence sector and other things. They check all the boxes that you need to give somebody confidence about where the data is going, where it’s living, who has access to it, etc. And I knew that if we wanted to start quickly, and starting quickly was imperative here given the context, we just needed to have a deployment option that firms could be comfortable with. So Betty Blocks is the partner that gives us that right out of the gate on day one.

Greg Lambert 25:39
Well, Christian, you had mentioned in passing that Betty Blocks isn’t just in the legal market; they’ve got a much wider customer base. And so looking at the potential for a product like LEGA, do you see it as being used in enterprises outside of law firms? And I think I kind of know the answer, because it’s kind of what you know. But why are you focused on the legal industry at the moment?

Christian Lang 26:07
Yeah, it’s a great question. So, point one, there’s nothing inherently law-firm-specific about LEGA the way LEGA is set up today. So that’s certainly true. That said, the challenges that we’re talking about, and that we’re trying to address, are particularly acute in the legal industry right now; that landscape is incredibly challenging for firms. Two, I’m very lucky to have a deep set of relationships in that space, and know how to talk that language and know what these folks need. And three, and this is kind of the wildcard that a lot of people don’t think about: in this fast-moving landscape where we’re grappling with all these significant challenges, it’s the firms that are going to be advising all the other corporates about what they need to do and how this needs to work. So if we can show up and solve the problem for the firms and help them clarify their own understanding of this landscape, I think it redounds to our benefit, and things can roll downhill very quickly. I think certainly we are open to exploring a broader, more horizontal business. In fact, I’ve already had a few conversations on that front, and people seem quite keen. We’re definitely at what I think of as a very true MVP state. In part because of the Reynen Court experience, where we tried to do so much so quickly (and that was one of the challenges), I’m really committed to taking to market the absolute bare minimum thing that is actually valuable to people today, an MVP, and then working closely with customers to figure out the next thing and the next thing. So in that MVP exercise right now, we’re attacking law firms, but we’ll see where things go.
And then circling back to a point that both of you raised. Marlene, you asked this question, and Greg, you mentioned it when we were talking a minute ago about the security versus use case dynamic: there might actually be an interesting role for a player, LEGA or someone else, to play in a verticalized solution for the legal space on the solution front. Because, Marlene, as you were saying, we may discover that certain models are particularly good at enabling certain use cases. And I’m excited, because we experienced this in the Reynen Court context: a lot of firms are moving beyond the idea that technology is a key differentiator in the market. They really get that it’s just an enabler; the differentiators are the expertise and the relationships, et cetera. And so I think there is an appetite in the market for firms and other players to come together and share learnings, particularly in this context where the entire industry might be a little bit under threat as a result of these technologies. I would love for LEGA to be a point of coalescence and community building around which key thoughtful people can come together and share notes about the best use cases, the best ways to enable those use cases, and the best ways to configure the solutions. So I’m not opposed to the idea of having a highly verticalized business, but there’s nothing inherently driving us there at this moment. And I wouldn’t be at all surprised if we attack more horizontally sometime later this year.

Marlene Gebauer 28:53
I’m gonna go back to the security and protecting-client-data area that we were hitting on a little bit before. Clearly, it has to be extremely important to firms to protect that. You were talking about some of the sandbox spaces that you’re offering. Can you delve into it a little bit more, like, what is LEGA’s role in helping law firms, and actually legal departments, gain experience with LLMs in a safe space, but also making the firms and law departments feel safe?

Christian Lang 29:32
Yeah, absolutely. There are a few pieces to that. Just getting people out of the consumer accounts and into the enterprise accounts is a big first step. But that’s not the LEGA thing; that’s just what the firms need to do. What we’re trying to do in the gen one, the MVP, is something very, very simple. I’ve actually been quite impressed with the maturity that a lot of the firms are exhibiting in how they craft their policies. I mean, you know, you’re used to them just saying no. This is not a context where we can just say no, and we can talk about the shadow IT problem in this context in a minute, if you guys are interested, because the size of that problem is unprecedented. We’ve got to compare apples to apples; we’ve got to look at what the alternative is, because people are going to use this stuff. And so what we’re trying to do is this: when a firm comes to an institutional view (and maybe we can help them come to that view) about what they can and can’t do in these tools with respect to whatever models they want to consume (and obviously we facilitate them consuming the models they want), they come up with policies, and a lot of the early emphasis is just, let’s help you actually operationalize those policies, so people know what they are in context, can actually adhere to them, and are reminded about them at the right time. It’s a very small thing, but it’s a very big part of the puzzle. Yes, you can craft these wonderful policies about what people can and can’t do, but if they forget them, or don’t know them, or don’t know them at the right time, or don’t know which solution they apply to, they’re worthless. So one of the very simple things we do is just give firms the toolset to make sure the user knows that, and we try to keep their attention all along the way when that regime is changing.
And then the second part of the one-two punch is having the toolset to be able to see whether that’s working, and if it’s not working, to be able to intervene and change behavior. So those two things together, the compliance checkpointing and the audit logging, are just two very simple ways that we try to help standardize and operationalize that compliance exercise. And then we layer on top of it that flexibility about consuming different models in the same way in the same platform, so users have one place to go and all the data from all the solutions comes into one place. I think that actually solves a pretty big challenge. On top of that foundation, you can do some really interesting things that might be a lot more proactive, a lot more algorithmic. But step one of the LEGA MVP is just as dead simple as simple can be, and we hope it’s actually quite powerful. And the reception has been pretty great.
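The compliance-checkpointing and audit-logging idea Christian describes can be sketched in a few lines. This is a hypothetical illustration only, not LEGA's actual implementation; the policy rule, the matter-number format, and all names here are invented for the example:

```python
import re
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AuditLog:
    """Records every prompt attempt so admins can see whether policies work."""
    entries: list = field(default_factory=list)

    def record(self, user, model, prompt, allowed, reason):
        self.entries.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "user": user,
            "model": model,
            "prompt_preview": prompt[:80],
            "allowed": allowed,
            "reason": reason,
        })

# Hypothetical firm policy: block prompts that appear to contain a
# client-matter number (e.g. "12345-0042") when bound for an external model.
MATTER_NUMBER = re.compile(r"\b\d{5}-\d{4}\b")

def checked_completion(user, model, prompt, send_fn, log):
    """Checkpoint a prompt against policy, log the attempt, then forward it."""
    if MATTER_NUMBER.search(prompt):
        log.record(user, model, prompt, allowed=False,
                   reason="possible client-matter number in prompt")
        return ("Blocked: firm policy prohibits client-matter identifiers "
                "in prompts sent to external models.")
    log.record(user, model, prompt, allowed=True, reason="passed policy checks")
    return send_fn(prompt)  # forward to the actual model API

log = AuditLog()
reply = checked_completion("associate1", "gpt-4",
                           "Summarize matter 12345-0042 filings",
                           lambda p: "...", log)
print(reply)             # the prompt is blocked at the checkpoint
print(len(log.entries))  # and the attempt still lands in the audit trail
```

The point of the sketch is the pairing Christian calls the one-two punch: the checkpoint enforces the policy in context, and the audit log lets the firm later see whether the regime is working.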

Greg Lambert 31:53
I’m curious, you had mentioned in passing the shadow IT problem. Would you elaborate? I think I know where this is going, because I’m probably part of the problem as well. But when you talk about the shadow IT issue, what’s going on there?

Christian Lang 32:09
Yeah, you know, it’s funny, I throw that term around a lot, assuming people know it. And I was talking with somebody yesterday who is one of the most sophisticated people in the market I know, and they were like, oh, that’s funny, I don’t know that label. The whole concept of shadow IT is that you have a sanctioned technology stack in an enterprise, where the tech and security folks have vetted it, the enterprise has licensed it, and that’s what you’re supposed to use, and everyone knows what’s supposed to happen, right? One of the big challenges as technologists for an enterprise is that if you don’t give the right tools to your users, they still have to do the business, so they’re going to find what they need to do the business. I personally have experienced this; I hope I’m not going to get in trouble for telling the story. When I was in practice at Davis Polk, the firm implemented MobileIron, the lockdown of the mobile phones, allowing only the safe apps, et cetera. Great system; MobileIron is great. But I was working out of the London office, and in that small little office we were on planes three days a week. I was doing the IPO of ABN AMRO, flying to Amsterdam like twice a week. The way it locks the phone down essentially doesn’t let you compare two documents to one another on mobile, and if anyone’s ever practiced corporate law, there’s nothing you need to do more than compare two documents. You wanted to adhere to the policies, but you literally couldn’t. So you had to come up with a creative way to get a document out of the protected environment the firm has set up and put it on your personal device, so you could look at them side by side or something, right? And that’s shadow IT: you’re having to use this whole separate, unsanctioned IT stack. That’s always a challenge.
And our job as technologists and firms is to try to strike the balance of giving people enough flexibility that they don’t reach for their own tools, but keeping the stack standard enough that you can manage it securely on behalf of the enterprise. In this context, OpenAI, with ChatGPT, has absolutely blown the doors off this entire new product category and put something very elegant and simple in people’s hands that is blowing their minds and that they see can be used so powerfully. You can’t unring that bell, stem that tide, dam that river; it’s just happening, right? People are going to use this stuff. So we have to find a way to give them something as good, or close to as good, that they can play with, and if we’re going to restrict it, we have to be able to articulate to them effectively why we’re restricting it, why that’s important, and how we’re working as fast as we can to lift those restrictions. If you don’t do that, if you just try to say no, people are going to do it anyway, and you’re going to be in a far less secure position than if you had given them an option, a channel. Think about damming a river: real dams don’t just try to stop water, they would break. They channel the water so that you can capture value from it, and you have to have those valves to release pressure and so on to make it all work. That’s the context I think we’re in here.

Marlene Gebauer 35:00
What have you seen so far in terms of the biggest opportunities for law firms to leverage the large language models? And how has LEGA helped in realizing those opportunities to date?

Christian Lang 35:13
Yeah, I think it’s actually very basic: it’s just giving them an option to move quickly and get their hands on them. So right now, I think a lot of firms are challenged because they don’t quite know what to do. And, as we alluded to earlier, there are some great options in the marketplace where some fantastic companies are making really powerful but very opinionated systems available for certain use cases, but at very high cost. And it’s a challenge, right? And so firms sometimes don’t even know how to kind of…

Marlene Gebauer 35:43
How do you even negotiate it?

Christian Lang 35:45
Yeah, well, you’ve got to have some way to get started. So basically, what we’re trying to do is really lower the barriers to entry, the cost of entry, and let them just get started and get their hands on stuff, because I don’t think anyone knows yet where this is going to go exactly and what the use cases really need to be. We just need to start playing, and in order to play you’ve got to have access, and then you’ve got to have rough enough enterprise guardrails that you can turn people loose in that safer space in the middle. And so that’s kind of what we’re doing there. And I’ve been very excited that the response has been overwhelming. I mean, we literally went from the idea of LEGA to our first AmLaw 100 deployment in 90 days. It’s just unheard of; I’ve just never heard of or seen anything moving that fast. But we continue to have conversations where everyone’s like, oh, this is exactly the problem I need to solve, like, yesterday. So right now, Marlene, it’s just helping them find a way to get started without going and hiring a bunch of people, without having to invest millions of dollars, where they can have a little bit more control over access, over policies, over content. That’s the win for phase one. But that phase one may only be six months, twelve months; we’re really excited about the phases that come after. But right now, I think the biggest way we can add value, and the way we’re seeing it play out, is giving people a pathway to actually get on the journey with some confidence.

Do you actually offer any type of guidance? I mean, I love this idea of the freeform, you know, you’re sort of experimenting, but perhaps some may need a little more surrounding that, in terms of: here’s what you could do, or here’s how you could do it. Is that part of the offerings as well?

It’s a great question. Of course it is, to a limited extent, in the following way: when we deliver the platform, we’re going to have some solutions in there, and we’re going to help you configure them as part of the implementation, just to help get started. But along the lines of what we were talking about earlier, about potentially wanting this company to be a place where people could come together and share learnings, I actually think leaning into something like that, which we’ve not yet had the time or opportunity to do, could be a really cool and interesting frontier. And it doesn’t necessarily need to be a LEGA thing; we could just be a participant in a group, we don’t need to be leading the group, it doesn’t need to be a commercial effort. But I do think it’s about putting people in a position to explore, and not just explore; part of the platform is set up for this. When you have a breakthrough, when you find something while you’re exploring, you need to capture the value, and then you need to scale it and build on top of it, right? That’s one of the problems with everyone using something like ChatGPT in a consumer app on their own phones: not only is there a compliance challenge, but the other, bigger problem for me is that when they do something awesome and really interesting and cool, and craft a cool prompt, 99.9% of the value is just going to evaporate, because no one’s going to see it, no one’s going to know it, no one’s going to be able to scale that same approach to the other 1,500 people at the firm who need to do similar things. So I do think there’s a really powerful opportunity around that. That said, we haven’t made a huge amount of progress yet, because we’ve been getting to first base, but I’m excited about it.

Greg Lambert 39:01
I’m curious, as a user of the product, let’s say I’m an associate or junior partner, and I’m engaging with LEGA. This is when I wish we were a video podcast, or on YouTube. Can you describe the experience that the end user has engaging with the product, what it is now, and then how you envision that changing or adjusting over time?

Christian Lang 39:32
Sure thing. So, you know, we talked a lot about how the tip of the spear here is the governance aspect of this, but a lot of that’s kind of under the covers; it’s the admins, the innovation teams, the IT folks…

Greg Lambert 39:43
That’s not where the magic is. The magic is in the frontend.

Christian Lang 39:44
That’s where the magic is. Yeah, so if you think about it from an end-user perspective, what they’re seeing when they log in to LEGA is solutions: tiles representing, okay, I can chat, I can summarize, I can generate this, I can do that. And these are the things that, when the innovation team is tinkering and defining and configuring solutions in a few clicks on the back end, they’re provisioning access to the user base. The user shows up and has a little app-store-like experience of solutions that they can hit and use and play with. So that’s what they’re doing. And in that, you may want to incorporate those users into the process of evaluating whether the different models provide better solutions, Marlene, to the thing you were talking about earlier. Or maybe you’re making that decision for them, and they don’t even know that the solution they’re using to summarize documents is being driven by GPT-4, while the solution they’re using to generate a tax brief is being driven by Bard because it has access to the internet and, you know, current data. And let’s fast-forward a little bit, Greg; this starts to answer your question about the next steps, and we’ll talk about that more explicitly in a second. I think what’s really exciting here is what could happen in the next three, six, nine months in this landscape. The smart technical people in my orbit that I go to for advice on these questions think there’s a massive fork in the road ahead of us, probably just a couple of months out, about whether we’re going to end up in a world where we take the left fork and all of the significant generative AI solutions are driven by the same handful of large language models that are hosted by big tech companies, you know, the OpenAIs of the world. Like, that’s what’s going to happen; it’s going to be kind of oligopolistic.
Or whether we’re going to be in a much more democratic landscape, where you have smaller models that are much more tailored to different solutions and, notably, might be small enough to self-host and run ourselves, like LLaMA-based models and other things. So we could actually deploy a model into a firm’s own virtual private cloud and have it interact with the DMS and proprietary data sets, and the data never actually needs to leave the security perimeter, which opens up a whole new host of use cases. And so one of the reasons LEGA is set up the way it is: when we start to move beyond the exploration phase, where we’re playing with external large language models, maybe using confidential data, or maybe not, maybe the firm from a policy perspective decides it’s non-confidential data only, and when we get comfortable that we have an option to drive a solution with a different model, you can literally just unplug the other end of the API call and plug it into a different place, and the user experience doesn’t need to change and the governance layer doesn’t need to change. Just think about the old telephone switchboards, where you have the ability to plug the cords into different things. I think that sort of platform-based abstraction layer is incredibly powerful. I mean, I am an absolute zealot about platforms. I think they’re incredibly powerful and transformative, and I can’t think of a better context where having an actual standardizing platform layer is really going to be an absolute necessity in this crazy, crazy time. So that’s conceptually. And then to answer your question, Greg, obviously we won’t get too detailed on this.
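The unplug-one-end-of-the-API-call idea can be made concrete with a tiny routing layer. This is a sketch under stated assumptions, not LEGA's design: the `SolutionRouter`, the backend functions, and the solution name "summarize" are all invented for illustration, with the backends as stand-ins for real model API calls.

```python
from typing import Callable, Dict

class SolutionRouter:
    """Maps a user-facing solution (tile) to whichever model backend is plugged in."""

    def __init__(self):
        self.backends: Dict[str, Callable[[str], str]] = {}

    def plug(self, solution: str, backend: Callable[[str], str]):
        # "Unplug the other end of the API call and plug it into a different place":
        # only this mapping changes; users and governance see the same solution.
        self.backends[solution] = backend

    def run(self, solution: str, prompt: str) -> str:
        return self.backends[solution](prompt)

def gpt4_backend(prompt: str) -> str:
    return f"[gpt-4] {prompt}"          # stand-in for an external hosted model call

def local_llama_backend(prompt: str) -> str:
    return f"[local-llama] {prompt}"    # stand-in for a self-hosted model in the firm's VPC

router = SolutionRouter()
router.plug("summarize", gpt4_backend)
print(router.run("summarize", "summarize this brief"))

# Later, governance decides the data must stay inside the security perimeter:
router.plug("summarize", local_llama_backend)
print(router.run("summarize", "summarize this brief"))
```

The design point is the switchboard metaphor from the conversation: because users only ever address the solution name, swapping the model behind it requires no change to the user experience or the governance layer.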
But just thinking about the future, we’ve been, as I mentioned earlier, rigorous about trying to scope down, scope down, de-scope what we’re releasing to be just the absolute core kernel of what an MVP needs to be. But there are some features from the original design that I have no doubt will be important parts of the next version, or the version after that. If you think about this foundation we’ve laid, by giving enterprises the ability to channel traffic to enterprise accounts through a consistent layer and have that simple governance layer there, not only do you have the ability to drive the operational benefits we’ve talked about, from policy compliance and the monitoring and auditing, it lays the foundation for some really interesting things. You could have algorithmic review of prompts before they go out, so you could scan them prophylactically, as an active moderation experience, if it’s in a highly sensitive context. I’ve already seen some cool new ventures in the market in the last couple of weeks starting to think about this. So that’s something I think we will play in, maybe proprietarily, maybe as an integration partner to other algorithmic review technologies. But there’s a foundation being laid on top of which really cool and powerful things could be built and layered, and I’m very excited about that.

I don’t know if I’m allowed to ask this or not, but which models are you using?

You know, when I talk about the fact that I’m a platform zealot: we’re not trying to drive a substantive solution. The whole idea here is that if a firm shows up and wants to do something, we want to enable it. Now, interestingly, I expected us to be getting requests from our early-moving firms for a bunch of different models. It’s been very consistent: the firms want to use OpenAI models. Some of the firms want to hit those directly; some of them want to take it through Azure, the Azure OpenAI route. But the emphasis right now has absolutely been on the OpenAI complex. That said, we’ve got the Claude templates and the Bard templates and the other big major ones. I’m most excited about what is going to happen in three to six months, when we’re going to get beyond that big handful and start having people really want to lean in and use potentially other, smaller models, where they’re going to have a role in this sort of platform. And, again, I don’t know whether we’ll do it ourselves or whether we’ll partner, but there’s a great opportunity for somebody to come along and help these firms create data processing pipelines so you can enable retrieval-augmentation approaches: you want to hit a big foundation model, maybe through Azure so you can negotiate the data retention period or whatever the case may be, but actually bring into the query your own reference set of documents from your own document repository. It’s a really interesting opportunity. We’ve not had any early movers actually try to cross that bridge yet, but I’m waiting; I certainly think we’ll cross it before the end of 2023.
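The retrieval-augmentation approach Christian describes can be sketched in toy form: pick the most relevant firm documents, then prepend them to the query before it hits the foundation model. Everything here is illustrative; a real pipeline would use embeddings and a vector store rather than the naive word-overlap scoring below, and the sample repository is invented.

```python
def score(query: str, doc: str) -> int:
    """Crude relevance score: count of words the query and document share."""
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d)

def retrieve(query: str, docs: list, k: int = 2) -> list:
    """Return the k documents that best match the query."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def augmented_prompt(query: str, docs: list) -> str:
    """Bring the firm's own reference documents into the query."""
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# Stand-in for a firm's document repository (contents invented for the example).
repository = [
    "Seller-friendly first draft of the escrow clause",
    "Final negotiated indemnification language",
    "Office holiday party memo",
]

print(augmented_prompt("escrow clause draft", repository))
```

The augmented prompt, not the raw question, is what would be sent to the external model; the firm's documents ride along in the request, which is why the data-retention terms Christian mentions matter so much for this pattern.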

Greg Lambert 46:00
Oh, yeah, that is definitely the Holy Grail right now. Everyone I’ve talked to says all this LLM stuff is great, but how do I bring my own data into it is usually the next question.

Christian Lang 46:13
Yeah, and it’s not that easy. Conceptually, there are some elegant ways to do it. But I’ve been lucky to have a pretty good network of people doing pretty cutting-edge work in this space, and even the people who are really well resourced and have great technical talent are like, right now, you can’t do that very well; you’re not going to get results anywhere near as good as you’re going to get from hitting GPT-4, for example. But that’s going to change, and that’s going to change soon. So I’m excited about the change.

Marlene Gebauer 46:39
Christian, we ask all of our guests, our crystal ball question. So you pull out your crystal ball for us and peer into the future. What do you see as a challenge or a change in the legal industry over the next two to five years?

Christian Lang 46:53
Two to five years.

Greg Lambert 46:54
It used to not seem so far out. Now it’s like forever.

Marlene Gebauer 46:59
That sounds like ages.

Christian Lang 47:04
Alright, I’m going to give you three little things that come to mind, because they’re passions of mine. One challenge will be around training. When I left Davis Polk, I was writing a book on how to be a good baby lawyer at a law firm; it ended up being cannibalized and turned into a blog called Black Lines and Billables, which is still up. I’m surprised to see, when I go back to it (I haven’t published to it in five or six years), it’s still highly trafficked in the Big Law circles.

Greg Lambert 47:35
Evergreen issues.

Christian Lang 47:37
Exactly, it’s evergreen content. One of the things I used to write about passionately, because it’s the thing that turned me from a very mediocre junior associate into, I think, a pretty highly regarded senior associate, is the way you train yourself and hone your judgment faculty. I won’t go into the full spiel here, but one absolutely essential part of that is having to truly grapple with unknowns and make choices with imperfect information, because that sets you up to absorb the learning when you see somebody do it better and differently. We’re about to be in a world where, if you don’t deliberately seek it out in new ways, you’re not going to have to grapple with those unknowns in the same way because of these technologies. On a net-net basis, that’s clearly a step forward, but we’ve got to solve the training problem. Two, and this is a challenge or a change depending on how you look at it, I’m excited about it: I think we’re about to hit a whole new wave of market efficiency in legal. Legal services have historically been a credence good; it’s been hard for people to evaluate the quality of legal services, even after they’re delivered. Some of that is because you could hide behind the complexity and the complex language, and that is going away. Somebody who’s getting legal advice today can drop something into the system and ask, can you explain this to me in plain English? How does this actually work? It’s really going to force us to find the things where we truly add value. I’m a huge believer in lawyers and a huge champion of them in the legal innovation landscape; I think lawyers add a huge amount of value. But I think where that value is added is very poorly understood, particularly on the transactional side; it’s a little bit easier in the litigation context.
And I think this is the thing that’s going to show us; this is the lodestar that’s going to help us find those pieces of the puzzle where we are truly adding a ton of value. And I’m really excited by that. But it’s going to be quite a shake-up in the market, particularly the middle market. So keep an eye on that for a change. And then finally, a little one I’ve always loved: I left Davis Polk to start a KM-focused venture, so I’m very interested in knowledge management, and this may only apply to the high end of the market, in premium practice. I think there’s going to be a real challenge around strategic curation. These technologies are incredible, but they are necessarily kind of a reversion to the mean, just like it’s hard to find interesting information in Google today because everyone’s optimized SEO so well. You can find the basics, but if you need to find an interesting, niche use case, it’s hard. The niche use cases, the weird ones: that is the stock-in-trade of top law firms, and that’s what you need to find from a KM perspective at a top law firm. It’s not the average example. I used to get so frustrated that people set up KM systems to give you the negotiated final document; that’s not what you need. When you’re looking for a document, you need the seller-friendly first version, or the one where the really weird thing happened. We’re going to have to find interesting, creative ways to keep that strategic curation exercise alive, and that almost necessarily has to be done manually, because you kind of need to know what’s weird. There are going to be ways in which these systems will help with that, for sure. But a lot of people are like, oh, KM is dead because of these technologies? I think exactly not that. I just think we’re going to have a really interesting challenge on our hands to do it well.
So those are three things in my crystal ball.

Marlene Gebauer 50:52
We could have an entire podcast just on that last answer.

Greg Lambert 50:57
And we may, we may. Well, Christian Lang, founder and CEO of LEGA, thank you very much for taking the time to come in and talk with us. I think everyone here has learned a lot and is probably walking away with even more questions. So thanks again for talking to us.

Marlene Gebauer 51:16
That’s a good thing.

Christian Lang 51:17
Entirely my pleasure. Feel free to reach out with those questions. We’d love to answer them.

Marlene Gebauer 51:21
So thank you to Christian and of course, thanks to all of you for taking the time to listen to The Geek in Review podcast. If you’ve enjoyed the show, share it with a colleague. We’d love to hear from you. So reach out to us on social media. I can be found at @gebauerm on Twitter,

Greg Lambert 51:36
And I can be reached @glambert on Twitter, Christian if people want to reach out and learn more. Where can they find you?

Christian Lang 51:42
On Twitter. I’m personally @ChristianLLang. And we have a new LEGA twitter handle which is @LEGA_AI.

Marlene Gebauer 51:48
I’m gonna follow right away. So, listeners, you can also leave us a voicemail on The Geek in Review Hotline at 713-487-7821. And as always, the music you hear is from Jerry David DeCicca. Thank you, Jerry.

Greg Lambert 52:01
Thanks, Jerry. All right, Marlene, I’ll talk to you later.