This week on The Geek in Review, we talk with Lennie Nuara, co-founder of Flatiron Law Group, about what it means to build a talent-first, AI-powered legal practice. Nuara brings a rare mix of lawyer, technologist, operator, and systems thinker to the conversation, drawing from decades of experience using technology to improve legal work, from early portable computers and databases to today’s generative AI tools.
Nuara explains why he resists the phrase “AI-first” in legal practice. For him, legal work begins with talent, judgment, and expertise. AI enters as a force multiplier, not the driver. At Flatiron, the firm’s model was already built around flat fees, lean staffing, process discipline, and structured data before generative AI entered the picture. AI now adds more horsepower to a system already designed to reduce waste, repeat touches, and unclear workflows.
Much of the discussion focuses on M&A due diligence, where Flatiron rethinks the deal life cycle from intake through closing. Instead of throwing documents into a massive repository and hoping AI sorts it out, Nuara describes breaking work into smaller pieces: diligence questions, responses, documents, clauses, topics, closing checklists, and reports. That structure lets lawyers use AI for deduplication, extraction, clause comparison, first-pass drafting, and issue spotting while keeping human judgment between higher-risk steps.
Nuara also warns against getting seduced by polished AI output. He describes generative AI as persuasive, fluent, and sometimes dangerously average. The bigger risk, in his view, is less hallucination and more “model monoculture,” where legal drafting drifts toward sameness because models train from overlapping bodies of public material. In complex private transactions, average language is often the wrong answer. Lawyers still need to understand leverage, client priorities, risk allocation, and where to push beyond market terms.
The episode closes with a look at pricing, training, and the future structure of law firms. Nuara argues that AI will pressure the billable hour, change junior lawyer training, and force firms to rethink the traditional pyramid. He also raises a practical concern from the early Westlaw and Lexis days: the cost of the tool matters. Flatiron tracks AI usage down to the clause level, treating tokens as part of matter economics. For legal professionals watching AI reshape transactions, this conversation offers a grounded reminder: better tools matter, but better process and better judgment still decide the outcome.
Listen on mobile platforms: Apple Podcasts | Spotify | YouTube | Substack
[Special thanks to Legal Technology Hub for sponsoring this episode.]
Email: geekinreviewpodcast@gmail.com
Music: Jerry David DeCicca
Transcript:
Greg Lambert (00:00)
Hey everyone, I’m Greg Lambert with the Geek in Review and I’m here with our friend Nikki Shaver from Legal Technology Hub. And Nikki, you have a new premium content layer out on litigation case management. Do you mind giving us some insight on that?
Nikki Shaver (00:16)
Yeah,
absolutely. Hi Greg, hi everyone. So one of the interesting things over the past year is that when generative AI first launched into legal, we saw a massive uptake of solutions in contracts and transaction management, as well as the broad AI legal assistant solutions. It took a little while for litigation to follow suit, but for those who have been listening for a while, you’ll know that sort of midway through last year, we saw a massive rise, all of a sudden kind of an explosion in litigation solutions. And that’s been really exciting. A lot of things coming out that really go beyond what was available pre-gen AI, you know, things that can manage facts and provide you with insights into where there might be inconsistencies in testimony, all kinds of tools that, frankly, I wish I’d had available to myself when I was a litigator.

So, as many of you know, we publish what we call premium categories on Legal Tech Hub. These are collections of really in-depth content that allow buyers to review solutions in a particular category and that provide the tools with which to evaluate solutions in that category. And the good news is that in May, we are launching our premium category for litigation case management solutions, including a lot of these newer generative AI driven solutions, as well as some of the incumbents that now have gen AI features. So log in to legaltechnologyhub.com, and if you’re a premium subscriber, you’ll be able to access all of that good content around new litigation solutions. And if you are not yet a premium subscriber, you can reach out to us and, you know what, you can easily become one and access that content.
Greg Lambert (02:01)
Well thanks, Nikki. That’s good. The litigators always felt a little left out on the AI tools, so this is good news.
Nikki Shaver (02:06)
Yeah, yeah, exactly. Now they get to catch up.
Marlene Gebauer (02:18)
Welcome to The Geek in Review, the podcast focused on innovative and creative ideas in the legal industry. I’m Marlene Gebauer
Greg Lambert (02:25)
And I’m Greg Lambert and today Marlene, we are joined by Lennie Nuara, who’s co-founder of Flatiron Law Group and a nationally recognized authority on technology, internet law, cybersecurity, privacy, M&A and complex commercial litigation. I’m sure there’s a longer list than that.
Marlene Gebauer (02:43)
There is a longer list. He does it all.
Lennie brings a rare mix of lawyer, operator, technologist, and problem-solver experience, from rebuilding a law firm’s technology infrastructure after 9/11 to helping launch regulated trading platforms, and now building an AI-first model for legal practice. So Lennie, welcome to the Geek in Review.
Lennie Nuara (03:05)
Thank you. Thank you both very much for having me on.
Greg Lambert (03:08)
Well, I remember when we had Conrad Everhard, the co-founder, on, he would constantly refer to “Lennie did this, Lennie did that.” So it’s great to see the infamous Lennie on the show here. But when Conrad joined us before, we talked a lot about Flatiron’s flat-fee M&A model and how it was challenging the traditional Big Law pyramid.
With Flatiron.ai, it feels like the model has evolved from just this alternative fee structure plus technology into something closer to what you referred to before we started recording as a talent-first, AI-operated system for legal work. So Lennie, I want to ask you, when you say talent-first, AI-operated, what does that actually mean for the practice?
Lennie Nuara (04:05)
Yeah,
so yeah, because a lot of people say, you know, AI native, and I don’t like the emphasis on the AI first. The practice of law is all about talent. That’s the number one thing that anyone hires. You have a problem, you want to find the best that you can afford to handle that issue. You don’t hire AI. And I’m not trying to malign anybody with regard to the AI-first label, that’s not the point. The point is, it’s talent, and it’s talent that has to drive the AI. So as a firm we started over nine years ago now, and we were essentially leveraging technology significantly before the advent of AI. But still it was talent-driven, and then I started building systems. I have a degree in computer science, and I’ve been playing with these tools for a long, long time, since the creation of that machine behind me. And the idea is to take a tool like that one, or AI today, which is a long way forward, and leverage that tool in the practice every single way you can. And I’ve been doing that in my practice for the 42-some-odd years I’ve been doing it: where I can use tech to enhance the practice, I use it. It started with word processing, later on using databases to find embezzlers and get the money back, and building flat-fee litigation support. Nationwide, we did flat-fee litigation in asbestos, didn’t really matter where, but I tried to leverage the use of the technology. It’s always driven by talent, though. What we do as counsel has to be not just in the loop. I just don’t like that phrase. We’re not just in the loop. We should be driving the AI in the practice. Talent-first, AI-powered is another phrase that I use: using AI to support the lawyers, which you guys know how that works, okay? But it’s in the nature of the architecture where the AI comes up. So, transactions, which we do the most of. Complex transactions, M&A is our first area, but we do other complex transactions, software development, commercial agreements. And we always say, okay, well, what needs to happen in this deal? There’s a repetitive aspect, and we’ll talk about that later as we dig into this. But there’s a repetitive aspect of things. Well, you know, we don’t have to type anymore the same way we used to. We can generate things. We can essentially break things into pieces and use AI to analyze that. There’s a variety of things that are available, and the AI is an absolutely fantastic tool. It’s also an incredibly dangerous tool in the wrong hands, like a Lamborghini in a 16-year-old’s hands. It’s a great car. It’s not the car’s fault that it wrapped around a tree. It’s the kid who drove it into the tree.
Greg Lambert (06:55)
I know some 57-year-olds that probably shouldn’t be driving.
Marlene Gebauer (06:57)
hahahaha
Lennie Nuara (06:58)
So
it’s a tool, it’s a great tool. And if you use it, so within our firm, we’re taking that tool, which we started with paralegals nine years ago. I was using paralegals that were not really paralegals. I called them that, but they were honestly stay-at-home moms that did things remotely for me and built out my database on every transaction we did. I would extract data from every agreement or every document, every invoice, whatever, with people typing and people reviewing. And they cost $35 to $50 an hour versus what it would have been within a firm for a client. And we just rolled that into our flat fee. And that was an example of architecture and structure and workflow that didn’t even have AI. And then as AI came into being, and by the way, first it was done manually, then it was done with databases. We use a lot of different tools that are commercially available. And now AI just feeds that whole process with, we call it, higher horsepower, the Lamborghini level, which is great.
Greg Lambert (07:59)
And just for the people that are just listening to this, the machine behind Lennie is an old Osborne, I think they call it a portable computer, right?
Marlene Gebauer (08:08)
Yes.
Lennie Nuara (08:08)
Yes, yes, it was 25 pounds. It still is 25 pounds, and I have the 37-inch sleeves to prove the fact that I carried it through law school. I honestly do. My sleeves are 37. It’s just an example of a tool that was used in my practice early on, and actually in my school. And AI is another tool, and it’s a wonderful, wonderful addition to our portfolio.
Marlene Gebauer (08:11)
Portable.
Greg Lambert (08:13)
Yeah.
Hahaha
Marlene Gebauer (08:17)
Hahaha.
So Lennie, let’s talk a little bit about workflows. When you talk to people at firms, it seems that they’re trying to bolt AI onto an existing process. So it would help me do my diligence memo. It helped me do a contract summary. It’s helping me with drafting. But Flatiron, as you’ve noted, is starting from a different place. You’re redesigning the M&A workflow around the assumption that AI is part of the matter from intake through closing. So what parts of the deal life cycle have had to be rethought? And where did you discover, like, the old workflow simply didn’t make any sense anymore?
Lennie Nuara (09:20)
All right, so that’s a great question. So let me break it apart a little bit.
Marlene Gebauer (09:24)
It’s a hard question
because I think people are having a hard time wrapping their heads around that.
Lennie Nuara (09:30)
So let’s do some level setting. First thing, we didn’t rethink, like, deal team strategy or negotiation calls, or what the read is on a counterparty, or, you know, what the calibration of risk is.
Greg Lambert (09:43)
You don’t have their agent talk to your agent and then just come up with it.
Lennie Nuara (09:48)
Yeah, no, I do not. In fact, I wouldn’t even want to try to create another agent of what is Conrad. You wouldn’t want another one. I always said if I had an agent, I’d still be working and my agent would be on the beach. Okay, so it’s like, it doesn’t always work. But so first, it’s an AI-powered practice.
Marlene Gebauer (09:54)
If anybody knows Conrad, that’s really funny.
Greg Lambert (10:05)
Yep.
Lennie Nuara (10:11)
We’re not taking any of those away, and this is true in litigation, in pure transactional work, regulatory work, whatever. There’s a core that doesn’t change. But in terms of the practice and the flow, what we’ve done is we re-architected, or I like to say re-engineered, the practice, the way we do deals. And so from the very, very beginning in our workflow, we break things into smaller pieces. So for example, due diligence. Most of the products that you see out there focus on analyzing the diligence, what’s in. We start even before that. We get the diligence on behalf of the clients, buy side, sell side. We put all the questions up on the platform. People respond to the questions on the platform, and they’re tracked on the platform from the very beginning of the deal, the start. And so immediately we have all this data with regard to what’s done, what’s not done, who’s done it, what documents relate to what questions.

And then we take those documents and we burst them, and we extract data from every single document. No different than what I did with paralegals nine years ago. We’re doing that now with AI and HI, but it’s human-driven. The lawyers pick what they’re worried about, what the issues are, how to extract, and they always view everything that happens in pieces. One of the things that we don’t do, and we’ll talk about it probably a couple times, is we don’t boil the ocean. We don’t throw all the documents into a database and say, okay, search this, search that. We break things down from the very beginning. There are areas, categories, subcategories, topics, the questions, the documents that relate to that stream. And then we break those down into the individual clauses or elements that are part of that request and those documents, and so on. So it’s small, small, really small, tiny, microscopic, all the way down, so that you’re not at the word level, but you may be at the sentence or the phrase level, and now you have data points on all that. And now you can start to build essentially answers to things like requests, reps and warranties, closing checklists, and so on. So yes, we redrafted, or recreated, what we do, when we do it, and how we do it from beginning to end. So it starts with the diligence, not the diligence documents, the actual creation of the response to the diligence, through to looking at that diligence to generate reports, helping the client essentially produce a report that might respond to which contracts can be assigned or can’t be assigned or need consent, to other touch points that may be, these are very valuable customers, and so on. All that used to be done by humans, and it still will be, but on, we’ll call it, a sliding scale.

The more volume there is, the more AI you may use to extract all that stuff. If it’s a small case, you may not use a lot of AI, just a little bit, because there aren’t a lot of documents. But on average for us, it’s 500 to 1,000 diligence questions per deal. And that used to be handled by a lot of human talent. We don’t need as much of that anymore. That’s not a bad thing. It’s just a different thing. No different than the mechanization of any manufacturing line. They used to build cars one at a time. They used to build houses one at a time. Now they’re being automated with regard to construction of homes, construction of buildings. You bring in equipment, you can do it faster and better. Some of the early labor might be displaced, but ultimately they’ll be pushed off to do other things, and other things well. That’s just an example.
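The “break it into pieces” structure Nuara describes, areas down to topics, questions, documents, and extracted clauses, with completion tracked per question, might be modeled something like the minimal sketch below. The names and fields here are illustrative assumptions, not Flatiron’s actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class Clause:
    doc_id: str
    text: str
    label: str          # e.g. "assignment", "change-of-control"

@dataclass
class Question:
    qid: str
    text: str
    status: str = "open"                             # "open" | "answered"
    documents: list = field(default_factory=list)    # ids of responsive documents
    clauses: list = field(default_factory=list)      # Clause objects extracted from them

@dataclass
class Topic:
    name: str
    questions: list = field(default_factory=list)

def progress(topic: Topic) -> float:
    """Share of questions answered: the 'what's done, what's not done' view."""
    if not topic.questions:
        return 0.0
    done = sum(1 for q in topic.questions if q.status == "answered")
    return done / len(topic.questions)

# Toy data for one topic stream.
contracts = Topic("Material Contracts")
q1 = Question("DD-001", "Provide all customer agreements over $100k.")
q1.documents.append("doc-17")
q1.clauses.append(Clause("doc-17", "Neither party may assign...", "assignment"))
q1.status = "answered"
contracts.questions += [q1, Question("DD-002", "List contracts requiring consent to assign.")]

print(round(progress(contracts), 2))  # 0.5
```

Because every clause keeps a pointer back to its document and question, downstream reports (assignability, consent requirements) can be assembled from these small pieces rather than from a single undifferentiated repository.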
Greg Lambert (13:38)
It sounds like one of the things that you’re doing, that I know Marlene and I talk a lot about and that firms still struggle with, is you’ve got to start with a solid set of data to work with. If you start with messy processes and data, you’re going to amplify that messy process and that messy data rather than clean it up.
Lennie Nuara (14:02)
Yes.
Greg Lambert (14:04)
You’re doing M&A, you’re doing due diligence, and you’re breaking it down into pieces, and you can apply that to pretty much any practice. That’s something that I think people need to understand, kind of wrap their heads around: how to break that into those pieces and clean that up.
Lennie Nuara (14:20)
Yeah, it flows, absolutely, it flows specifically from the talent, right? So I did work for a friend of mine, he’s in the financial services sector, not legal at all. And he saw what I was doing years ago with regard to databases and extracting data from documents, I called it turning documents into data. And he hired me on the side in a non-legal capacity to help build a database for them where we were extracting the data and all the reports, the analytics that they were generating within the firm. And it gave them a view of something that they never had before. And it just took some time. I needed their help. I said, well, what’s important to you? What are you looking for? We’re trying to find this report we did, and they’re all buried in 30 years of writing reports in the financial services sector. And it’s the same thing that we can do as counsel. There is data in all that we do, data in terms of the steps, but data in terms of the things that we’re moving around, the words, the paragraphs, the documents. That data can be captured and used in a variety of ways to improve the practice of law and the accuracy of what we’re doing on a go-forward basis, down that transaction scheme or through litigation or other.
So, one last piece of that. It required taking a step back, and the step back happened in the following manner, right? Conrad was doing deals for his whole career. I was always riding on the outside of deals. We were at different firms many years ago, but we were friends, and we joined forces when we formed Flatiron. He’s like, hey, Lennie, can you help me run these deals? And I’m like, okay, I’ve got my practice, but I’ll help you do the M&A stuff. He said, yeah, we’re going to flat fees. I’m like, okay. And then I looked
Greg Lambert (15:37)
Cheers.
Lennie Nuara (16:03)
at the way he did it. And it was incredibly inefficient, because they would touch the documents once, and then touch the documents again, and touch the documents again. So, you know, during the pre-LOI phase and the diligence, then during reps and warranties, we look at the documents again, then we look at them again for closing checklists, and then post-closing integration. I’m like, this is nice. You’re going to go look for those same documents four times. So I started reengineering the process out of, basically, the desire to be more efficient, and then allowing us to really hone those flat-fee quotes. And that just required, you know, a closer analysis of what’s important and what’s not. And as I said earlier, that can happen in any domain, right? It can be transactional work, it could be litigation, it could be regulatory. It’s just that if it’s driven by the right driver, somebody that has the intellect of what happens, and they take a step back and say, hey, is there another way to do this that’s more efficient, then it should be applied. And the tools have gotten significantly better, where you can do that. Now, I did it initially with paralegals and databases. Now I’m doing it with AI and databases, but still databases, because everything we do as lawyers, and lawyers don’t want to hear that, but everything we do as lawyers is data-driven. It really is. We’re not significantly different than the Street. I used to work on Wall Street, but we’re not significantly different.
Greg Lambert (17:21)
Well, so far in all of this, we’ve touched a little bit on the periphery about what you’re doing with the AI. But really, a lot of your foundational work here was, again, cleaning up the data, getting your processes right, understanding how many times you need to touch a document, and reducing that overall. So as you develop this M&A tool around that style of model, now we want to understand: how do you bring in the AI part of it? Are you looking at kind of compacting the different steps, or how are you throwing the AI at the process that you already seem to have made very efficient?
Lennie Nuara (18:10)
Okay, so in the first instance, there are different ways to use the AI. So I wouldn’t want to say it’s all AI; it’s AI in this style here, in that style there, and another style someplace else. So for example, you can use AI to de-duplicate all the due diligence questions. In the 500 to 1,000 questions, I guarantee you there’s at least a 10 and sometimes a 30% overlap, which is nuts. And I’ve had clients literally just collapse under the weight of that. So just deduping things. And actually I have a scale, and the scale runs from exact match, to similar, to, you know, not exact but still on the same topic, and so on. And I can present that. And then I push it back into the due diligence layout that we have, so that the buy side sees, by the way, this is the same question you’ve asked now four times, but it refers back to this question, and so our answer is going to be different. But then, you know, everybody can see that, and they see the numbers. So that’s one way. If you’re drafting documents, it’s another use of AI. Much higher risk profile than finding duplicates, right? If you miss a duplicate, okay, someone says, damn, I’ve got to answer the same question twice. Or if you point to something as a duplicate and it’s not,
somebody comes in and says, hey, no, they’re different questions, so please answer them both. Nobody so far has said, please answer them both even though they’re the same. Usually we just point them to the other answer. But if you’re drafting, or if you’re doing contract lifecycle management, you’re doing review, or you’re trying to go against a model like a playbook or something like that, the risk is significantly different. And you have to know that you’re using the tool differently for different things.
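[The exact-match-to-same-topic dedup scale Nuara describes could be approximated with a similarity score. The sketch below is purely illustrative: the tier names and thresholds are invented, and a production system would more likely use semantic embeddings than character-level string matching.]

```python
from difflib import SequenceMatcher

def similarity_tier(q1: str, q2: str) -> str:
    """Bucket a pair of diligence questions on a rough overlap scale."""
    a, b = q1.strip().lower(), q2.strip().lower()
    if a == b:
        return "exact"
    ratio = SequenceMatcher(None, a, b).ratio()
    if ratio >= 0.85:
        return "near-duplicate"
    if ratio >= 0.55:
        return "same-topic"
    return "distinct"

questions = [
    "Provide all customer contracts over $100k.",
    "Provide all customer contracts over $100K.",          # same question, different case
    "Please provide customer contracts exceeding $100k.",  # paraphrase
    "List all pending or threatened litigation.",          # unrelated
]

# Flag each later question against every earlier one, as a reviewer aid;
# a human still decides whether flagged pairs truly duplicate each other.
for i, q in enumerate(questions):
    for j, earlier in enumerate(questions[:i]):
        tier = similarity_tier(earlier, q)
        if tier != "distinct":
            print(f"Q{i+1} overlaps Q{j+1}: {tier}")
```

The point of the tiers is exactly the risk calculus Nuara lays out: an “exact” hit can be collapsed automatically, while a “same-topic” hit only gets surfaced for a person to confirm.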
So in the first instance, we apply AI wherever we can. It’s the very simplest thing of organizing the information during the due diligence process of collecting and analyzing. But then as you go further downstream, you have a higher and higher risk value that you would place on a potential mistake or the misuse of the AI. And so I look at that as still very valuable, but how do I use that? We use that tool, AI, for, let’s say, comparing clauses, or using the tool to help draft a version of a document, and we evaluate it so the lawyer that’s working on it can look. So for example, a closing checklist has 30 different documents, 50 different documents that have to be generated, from the assignment consent letters to FIRPTA letters, these letters that have to go out to confirm certain regulatory compliance, a whole litany of things. Some of those letters are standard fare. They really are. And if you can extract the to and the from and the section of the agreement and so on and so forth, which then can be eyeballed by a partner or an associate to confirm that they’re correct, and they can go out, that’s a great use of the tool. In another realm, if you’re actually drafting the master agreement, that’s a much more difficult ask of the AI. And maybe you will, but you’re going to give it a lot of feedstock, a lot of documents that will essentially frame up what you’re looking for. And then you’re going to have to have a really serious analysis of the quality of it. I found, and this is two years ago, let’s say, you could look at a document that’s generated by AI, and I swear, it looks fabulous. It reads incredibly well. And as counsel, we get sucked into that. It’s like, oh my God, this is done. We’re done. Then you take a step back and say, wait a minute, it didn’t address this, it didn’t address this, it did it backwards. It’s like, oh my God. Oh yeah, it said something really well. It’s the articulate con man. And I used the phrase last year at Legal Innovators: everyone’s hot in the back room getting high on gen AI. It’s like, oh, this is awesome.
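[The standard-fare closing letters Nuara mentions, generated from AI-extracted fields and then eyeballed by a lawyer before they go out, might look something like this sketch. The template text, field names, and review flag are all hypothetical.]

```python
from string import Template

# A toy consent-letter template; real letters would come from firm precedents.
CONSENT_LETTER = Template(
    "To: $counterparty\n"
    "From: $client\n"
    "Re: Consent to assignment under Section $section of the $agreement\n\n"
    "We request your consent to the assignment described above."
)

def draft_letter(extracted: dict) -> dict:
    """Fill the template from extracted fields and flag the draft for review.
    Nothing is marked ready to send until a lawyer signs off."""
    text = CONSENT_LETTER.substitute(extracted)
    return {"text": text, "status": "needs_attorney_review"}

draft = draft_letter({
    "counterparty": "Acme Corp",
    "client": "Target LLC",
    "section": "9.2",
    "agreement": "Master Services Agreement",
})
print(draft["status"])  # needs_attorney_review
```

The split mirrors the risk scale in the conversation: mechanical field-filling is cheap to automate, but the gate into “approved” stays a human decision.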
Greg Lambert (21:55)
It didn’t actually say anything. But it said it well. It said something well; I’m not sure what that something is.
Marlene Gebauer (21:58)
It said something, but it didn’t say everything you needed to know. Yeah, yeah, yeah.
Greg Lambert (22:14)
Ha ha
ha!
Lennie Nuara (22:15)
Holy cow. Wow. It’s like, wait a minute, wait a minute, you know, like, I’m not experimenting with drugs, I really know what I’m doing. No, no, you’re not. So AI can have that tendency. So as you move down the spectrum of its use, if you recognize how it’s going to be used, then you take a moment. One of the things that we do is that we break things into pieces. I’ve mentioned it before: we parse and break. The smaller the pieces that you give an AI,
Greg Lambert (22:24)
Ha ha ha ha.
Lennie Nuara (22:44)
the higher likelihood of success that you will have with it; the bigger the task that you give it, the worse it will be. And the reality is, that’s the same with humans. If I give an associate 10 things to do, some of them are going to come back wrong. Okay. If I say, just do this, just do that, and so on. And I’ve been known to be a micromanager, as you can probably already tell, and I get a lot of grief for that. But the reality is that when you micromanage, it takes more of my time, but ultimately the product, and the training to the student, or the mentee, or the associate, is incredibly more valuable. Yes, they can wander in the forest on their own for hours or weeks at a time and produce materials that then I would edit, and so on. But I find if you do things in pieces, it works out much better with humans, and it works out much better with AI. I’ve taught lawyers for a long time, and I used to run hiring at some of the firms I was at, and so on. So it’s a big thing to me to help bring up the youngers and bring them through. And it just takes time, and you’ve got to deal with AI the same way. And so they’re parallel.
Greg Lambert (23:50)
How do you, Lennie, as a self-proclaimed micromanager, how do you know when to stop? It’s almost like doing research. When do you know, okay, I think that, at least for right now, this is where we need to stop, because otherwise we’re getting diminishing returns?
Lennie Nuara (24:11)
I can’t give you a quantum. That’s a quality issue that the HI, the human intelligence factor, is critical on. I will tell you that a first, second, third, or fourth year will say, okay, it’s done. Then a fourth, fifth, sixth, or seventh year will say, no, no, it needs to be fixed. And the partner says, you’re both wrong, it’s still not done. And that comes from wisdom. And it’s the same thing. I would treat AI the same way you treat young associates, but also break it into smaller pieces. That way you can essentially trust but verify, right? You can trust them to do something, but if you do it in small pieces, you’ll find the mistake in that one place. So maybe it’s finding the mistake in the hallucinated citation, or in the logic that someone missed. I have a whole other big speech and article that I’m working on about how it’s not so much hallucinations that are the problem, it’s a race to the mean with regard to the use of AI. Maybe you’ll ask me a question later; I’ll let that pop out later.
Greg Lambert (25:14)
Ha
Marlene Gebauer (25:16)
I like how you were describing this: if it’s more complex, you have more chance of problems using AI. But two things. I think it’s sometimes challenging for attorneys, or to explain to attorneys, that certain types of things are more complex than other types of things, and that you’re going to have more of a chance of not getting the results that you want doing one thing versus doing another thing. So I’m curious how you make that determination, or how you explain that to people that are working with you and working with the tool. And also, what about sort of agentic workflows? Is that tackling some of this, because you are able to take something that is more complex and kind of break it into steps?
Lennie Nuara (26:08)
Yeah, so I’ll do the second half first. So breaking things in the steps is what I was talking about, right? And you can do that with agentic components. The key, my perspective, is to stop and have HI, human intelligence, in between the steps. Many people are building agentic workflows. Great, OK, but there’s no verification opportunity between the agents. That’s no different than saying you’re not going to look at each of the steps from the first associate.
the senior most associated junior partner. That’s just a recipe for disaster. You break things into pieces and then you can verify that. It’s hard to judge upfront all the steps, but. If you can reward the senior talent, the partner with the ultimate. Goal of more efficiency later, they’re going to have to invest more time.
early to break down their process into smaller pieces. They know all the steps. They know where the issues or the problems will erupt. And if they take the time to look at their process with a critical eye and break it into pieces, that’s an efficiency hit on them. They’re not efficient, but hopefully it will be a multiplier for later when they invest the time now.
⁓ It’s no different than investing time in an associate. You invest the time now and build the process in small increments along the way. Then you can build out the technology again with verified steps in between to build it out and create reliability over time. But it is a time sync and that’s something that you I’ve spent an inordinate amount of time the past couple of years. So you know building deal driver and dealing with Megan Ma at Stanford Deal Mentor.
which is a negotiation simulation that’s agentic based and so on. And the amount of time that we’ve been devoted to those, and I’m not bragging, it’s just, if you want it to be right, you must spend the time. And I have the flexibility because I don’t have the labor stack that existed when I was a partner at Greenberg Troward or Thatcher Profit or any of the other firms I worked at. I can do that. And if I was at a firm, they’d either say, okay, go for this, we’re gonna switch your role, your numbers are gonna be different and so on and so forth.
hopefully we’ll generate efficiency from live. But we just we did that investment into our firm and into Dealmentor. It’s a hard thing to swallow for many firms. don’t begrudge them at all. It’s hard if you’re at a big firm and you’re grinding through and you’re making your numbers and you’re doing well. Why switch? You got to be kidding me. I’m not going to switch that. I’m not going to change my comp. I don’t want to. I’ll help a little bit. I mean, I remember I was laughing at it.
conference I went to and the firm, which remains nameless, was bragging about the fact that, you know, they give whatever 50 hours a year or 100 hours a year to the associates to think of operationally how to make things better when I can do that in a month. I could spend an extra 200 hours in a month. Sounds insane, but I will. And I’ll do that because it’s giving us tremendous operational efficiency later. But I can do that.
Firms need to look at that, and they can do that with their ops group, but at some point the real talent needs to spend that time. And that's hard to get; it's understandable.
Marlene Gebauer (29:30)
So you were mentioning these sort of judgment calls in some of these AI-enhanced workflows. So the human intelligence, the "why it matters, what's the next step" layer that sits above what the AI is actually doing. As you're designing the M&A tool and a broader type of AI-powered workflow, I know you say that seasoned practitioners know what the steps are, but you're also saying that sometimes it's hard to figure out the steps. And I have experienced that too, because it's in your head, but when you have to document each step, that is a little more challenging. And at that point, when do you decide AI can take the first pass, and when does human judgment have to step in? Where are you going to draw the line between the AI-assisted execution and where the business judgment comes in?
Lennie Nuara (30:27)
I think it’s again, it’s a great question and keep going back to it’s in parts, right? So and by the way, a trick can be if you spend time with a lawyer, they want to know, I’ll throw myself back into the old days. They want to just dictate out the flow of a deal from beginning to end. See it written down once, just dictate it or or tell it to someone and that they can take notes. OK, do it a second time.
Let them fill in more, do it a third time. Okay, now put that into AI and say, write out this process and show me the steps in the process. You can put that into the AI or then it’ll give a 10 page or a 20 page list of the tasks that have to be accomplished. And you keep feeding that. That’s not a risk event. Someone now can look at that flow and say, but they forgot this. They forgot that. They forgot that. Lawyers are really good at finding a problem with your stuff.
not telling you in advance what the problem will be, but reacting to something on paper. So that’s just a little trick that I use to force myself saying, I know what I want to do. Bang, bang, bang, bang, bang, bang. I’ll get five things and then I’ll put it in the AI. And then I get 30 back and I’m like, but you missed this, this and this, you idiot. You’re useless, Mr. AI. And then I put more in and all of sudden the AI, because they’re sycophantic, will come back and say, ⁓ good catch. Let me add those ideas now.
which is a good, it’s a wonderful experience, but they suck up to us so much.
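The dictate-expand-correct loop Nuara describes can be sketched in a few lines. This is purely a hypothetical illustration: `expand_steps` is a stand-in for a real LLM call, and all step names and function names here are invented, not from Deal Driver.

```python
def expand_steps(outline):
    """Placeholder for an LLM call that expands a rough outline into a
    fuller task list. A real model would add inferred sub-steps; here we
    simulate a single inferred step so the loop is runnable."""
    expanded = set(outline)  # dedupe whatever has accumulated so far
    if "diligence review" in expanded:
        expanded.add("deduplicate diligence documents")  # simulated inference
    return sorted(expanded)

def capture_process(initial_outline, corrections_per_round):
    """Run one round per correction list: expand the outline, then merge
    back in the steps the lawyer says the model 'forgot'."""
    outline = list(initial_outline)
    for corrections in corrections_per_round:
        outline = expand_steps(outline)
        outline.extend(s for s in corrections if s not in outline)
    return outline

# The partner dictates five-ish steps; two review rounds add missed ones.
steps = capture_process(
    ["intake", "diligence review", "closing checklist"],
    [["escrow instructions"], ["board consents"]],
)
print(steps)
```

The point of the sketch is the shape of the loop: the model's expansion is cheap and not a risk event, and the lawyer's corrections are folded back in on each pass.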
Greg Lambert (31:51)
done. You’re right. You’re right. There are 10 Rs in strawberry. Well,
Marlene Gebauer (31:56)
ha.
Greg Lambert (31:56)
it sounds a lot like what we've heard from Wendy Jepson from Let's Think, where she talks about taking advantage of having the partner walk through, talk through, that process multiple times without realizing that they're talking through the process multiple times. It's an art, I think.
Lennie Nuara (32:18)
It is, it’s an art. It’s, it’s not significantly different than your art. Okay. When interviewing me, okay. You guys spent some time in advance things you wanted to cover. You wrote it down, you created your outline and now you can ask me the questions or preparing for a deposition. They know what they have to get to. They got to nail things. Okay. I remember the early deposition I took. I didn’t even go at all into the concept of damages. All I do is focus on liability and the partner said, well, okay. And so what were their damages? I’m like, Oh.
I didn’t get into that. You know, I a second year associate taking one of my early depths and big lesson to learn. Okay. can cover it all, right? But you got to think in advance. So you’ll do that, break things down, but then applying the judgment of, of what you use for, for what comes down to, well, what’s the risk factor for that? As I said earlier, right? The risk factor for deduping something is significantly different than the risk factor for directing the indemnification provision with regard to an M&A deal.
Or with regard to, let's say, the risk profile, which will have an impact on the reps and warranties and the disclosure statements and so on and so forth. The drafting of those things is critical. And yeah, you might run something through a first pass. It's very easy for some partner to react to the first pass, great, or even an associate can react to the first pass, and so on. How do you pick where those issues are? It will vary significantly. I can't give you a magic wand and say this works, and I know the AI doesn't have one either. I know it's a good first pass on many, many things, even a second or third pass. I joke with my wife, I can say Claude is like the greatest associate I've ever had. Never complains, never late, always on time, delivers things in minutes, not days, and so on and so forth. But it's imperfect, and I accept that. That's okay. And it will not make the judgment call. It will not. It will make an offer to me, but I just treat it as an associate. But the deeper risk when it comes to drafting isn't hallucinations. I don't care about hallucinations. That will ultimately clean up over time. It's conformity. It's this constant thing where all the models are converging at the lowest level of commonality. So you'll get what everyone else did. That's not what you need in your deal, as I said at the very beginning.
Greg Lambert (34:33)
It will own the
mediocre.
Lennie Nuara (34:37)
Yes, and that's a much bigger risk. Okay, now you're essentially abdicating your responsibility as counsel to constantly give your client what everybody else gave. You have leverage, and both sides have leverage. The buyer wants to buy and the seller wants to sell. But there are points that are different for each one of them. One of them might say, I don't care about the indemnity. I know my cap table is clean. I don't care. I'll indemnify up and down, left and right. And yeah, we had a cybersecurity breach, but I know what the breach was and I'll indemnify for that. I've had that happen with clients numerous times, where they say, that's fine. And all of a sudden the buyer's like, the full value? Yeah, sure, the full value of the transaction, I'll indemnify. Okay, so that is a judgment call, right? But you can do that on an item-by-item basis later in the transaction, because the risk gets higher and higher all the way through.
Lennie Nuara (35:34)
Yeah, it was an early problem in the use of AI, and it happened. But when it comes to drafting agreements and producing output, you can't expect the machine, the AI, to leverage your client's position. You might say to it, hey, we are going to take a hard stance on X or Y or Z, but you have to drive that. Again, talent first, okay, and then AI second. Essentially, you're never going to get the tall blade of grass out of an AI. You're going to get the nice, smooth, Augusta-level golf course. Not a single blade of grass too high or too low. They're all the same. Marvelous. That's pretty, but that doesn't help your client. You'll give up on certain points. You'll have to get others. Some clients will walk away; I have people that have walked away from deals. And that intelligence, that wisdom, really changes the use of your AI, if you recognize it. An associate won't, maybe some, but most won't. And partners that are time-pressured might not see it instantly. When they take a step back, take a breath, and then read the output, they'll be like, man, this is slop. This is not helpful. And my role is often to say, okay, you're right.
Lennie Nuara (36:54)
It’s slop right now. Let’s go back and ask for more pointed answers on X, or Z, or pointed drafting on X, and Z. And the slop then becomes better. The point is that the device, AI, writes really well. But it just doesn’t know what to write. So if you say, it to me, it will give you the standard. If you recognize the standard is not what you want, then you have to drive it to give you the in.
Nice pros, nice pros, but you have to tell it what you want to drive for.
Greg Lambert (37:27)
Are you giving it like playbooks to help it get a little better at the mediocrity?
Lennie Nuara (37:34)
Well,
I’m not a big fan of playbooks because again, it’s that’s another version of mediocrity at some level. And two, we’re not a corporate, you know, we’re not an enterprise. OK, if I was representing the same client all the time, always I might I might revert to that, but that’s just not what our practice is. The deals are somewhat unique all the time, but I will give it a stack of things that we’ve done that push the envelope a certain direction. Let’s say, you know.
The calculation of the matter, but the there’s certain calculations that have to happen and you want them to break a certain way. So the working capital calculation is really what it is. It has to break a certain way.
Greg Lambert (38:12)
Are
you finding the AI is getting better or worse when you give it things to do?
Lennie Nuara (38:19)
It’s right now it’s about
the same no matter how many times and sometimes I play one against the other. Our platform lets you use four if you want to put another AI on there. We can just put in API and off we go. And you can play against one another, but from what I’m reading from the various sources, they’re all being trained on the same corpus. Much of it is the same purpose and particularly with regard to private transactions opposed to public deals.
Very troublesome, there’s not a lot of data on the private transaction. So your experience base is really great. So that’s where we’ll, we do the old fashioned way. We pull our old deals and we put them in and it’ll create a first pass, but it is still literally just still a first pass. That may change over time in larger firms than mine. You know, the mega firms, the big law, they have a corpus that they can point to that are significantly larger than what we have.
And they might be able to create, you know, through RAG, right? Retrieval of a generation. They can push that data in for drafting purposes. That would be great.
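The "play one model against the other" idea amounts to fanning the same prompt out to several providers and looking for where they diverge. A minimal sketch under assumptions: the provider callables below are stubs standing in for real API clients, and the grouping logic is an illustration, not Deal Driver's actual implementation.

```python
def compare_models(prompt, providers):
    """Run every provider on the same prompt, group identical answers,
    and return the drafts plus the names of the non-consensus providers."""
    answers = {name: call for name, call in
               ((n, fn(prompt)) for n, fn in providers.items())}
    groups = {}  # answer text -> list of provider names that produced it
    for name, text in answers.items():
        groups.setdefault(text, []).append(name)
    consensus = max(groups.values(), key=len)  # the largest agreeing bloc
    outliers = [n for ns in groups.values() if ns is not consensus for n in ns]
    return answers, outliers

# Stub providers: two converge on the "market" answer, one diverges.
providers = {
    "model_a": lambda p: "standard indemnity cap at 10%",
    "model_b": lambda p: "standard indemnity cap at 10%",
    "model_c": lambda p: "uncapped indemnity for IP claims",
}
answers, outliers = compare_models("Draft the indemnity clause.", providers)
print(outliers)
```

Grouping identical drafts makes the divergent one, which in a monoculture setting is often the interesting one, easy to spot.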
Greg Lambert (39:18)
So with Flatiron, I mean, since almost day one you've looked at putting pressure on both the billable hour and the pyramid-style staffing that you see in Big Law. And it's kind of interesting, because right now everyone in Big Law is talking about how they're anticipating a change in the model, but at the same time they're recruiting 1Ls before they even take their first-semester exams. It's almost like they're doubling down on the existing model while knowing that there's a change on the horizon. So, with the M&A practice specifically, is there a new apprenticeship model? How are you seeing the industry bringing along not just new talent but existing talent, as the models seem to, I think they're going to change, they're going to have to, I think.
Lennie Nuara (40:17)
I think they will. In the first instance, we are extremely lean as a firm, right? We don't have associates. We will bring them in; there's lots of talent that the big firms have trained for us, sitting there, that doesn't want to be in the big firms for a variety of reasons. Usually we don't bring in very young associates, but lately I've been building out, at least internally, a model where, basically, the idea is, what is it? A barbell. You'll have a ton of senior talent here, a ton of very young lawyers here, and in the middle you may have a thinner path. The young side, in one sense, is, let's say, year zero, in other words, they're not in law school at all, up through one to three years. So you have talent that you can utilize, smart people who didn't want to drop $100,000 or $200,000 on law school. But they're very smart, okay? And you can call them paralegals or whatever. And I think we can build a model with that. I've done it on paper, and I've had these people working for me, the paralegals that I've used, and it worked out quite well before AI. You spend the time letting them essentially trudge through the mud, doing an amount of work that will give them an understanding of the legal issues or the things they need to spot. And they spend that time not on AI. So they have to walk through the mud and get dirty and sweat it out. And some of them will progress into the practice in later years, and so on. The middle years, three to ten or whatever it is now, at the bigger firms, I think unless they adapt quickly,
are more challenged, because a lot of what their work was was overseeing the years below them. So I think we should spend more time at the front end training. We have our Deal Mentor school that we did with Megan, and other methodologies, where we train the very earliest ones. But you train them, I'm sorry, the old-fashioned way, so they're not dependent upon the AI, so that when they are using it, they can take some of their hard-earned knowledge and wisdom and apply it, use it to oversee the AI that they're using. Now, you can apply what I just said to fourth-, fifth-, sixth-, seventh-, and eighth-year lawyers too. The difficulty is breaking them out of the mold they're in now, which is the traditional pyramid. And that's a risk issue that firms have to face and don't want to.
Look, firms don’t like flat fees because there’s risk, but we take it all the time. I don’t care. And so, you know, I don’t make as much money. I make this versus this, but I still made this. OK, so it’s just the height of how much and how we value our time and so on. So the flat fees at big firms are harder to do because no one wants to stick their neck out and take risk that the thousand hours that they build across seven people really was replaced by X.
And now what are they going to do with those bodies? And what are they going to do if you know? In a variety of ways, how do they handle? Hey, you don’t have the thousand hours, so what you but you finish the deal so you don’t have the hours the hours weren’t built. Am I going to get compensated? That should drive them towards a different fee structure. But right now it’s very, very difficult to. It’s very difficult to, know, to do a U turn in Queen Elizabeth 2. OK, you just you can’t just.
turn around, know, the steamship takes a while to turn. And I think that’s a problem. But again, it’s people hired talent.
Greg Lambert (43:47)
I've got an unrelated, well, a semi-related question to this. And I think it's one of the things I've been talking with other firms about. When it comes to AI, the last couple of years there hasn't been too much worry about the amount of money we're spending on the AI tools. But I think you are probably seeing as well that the token costs are mounting, especially as you throw more agentic processes in there. I mean, there are some software companies where people are spending as much as their salaries in just token costs. So are you somewhat afraid that you're just shifting the cost of an associate over to the cost of the tokens? Is that something you're thinking about?
Lennie Nuara (44:43)
Well, yes, I definitely think about it. If you were to go on Deal Driver, there's actually tracking of the cost per clause, per model, that you're spending. So you can call up that you looked at document X and see that you ran 14 different agentic flows, and what the cost of each run is for the clause in that one document. And then there's an aggregate of what your spend is across the full document, and then across the entire set of documents. And the reason is that, again, I come from a long time ago, when legal research was incredibly expensive. When I first started practicing in '84, actually no, '86, because the first firm I was at didn't have Westlaw or Lexis, but I got to a firm in '86 that did, and for a couple of years there'd be a partner running down the hall screaming, that four-page memo cost me $9,000 because of you! What did you do? Because they didn't know how to do Boolean searching, and so on and so forth. And I've never forgotten that. The cost of a tool is part of the economics of the transaction and how you quote. So we track it literally to the clause, and you can use any model you want. You can use three different models on the same clause to do a comparison.
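The clause-level cost tracking described here amounts to rolling per-run token costs up from clause to document to matter. A minimal sketch, where the field names, prices, and document names are assumptions for illustration, not Deal Driver's actual schema:

```python
from collections import defaultdict

def aggregate_costs(runs):
    """Roll agentic-run token costs up: clause -> document -> matter total."""
    per_clause = defaultdict(float)  # keyed by (document, clause)
    per_doc = defaultdict(float)     # keyed by document
    total = 0.0
    for run in runs:  # one record per agentic flow executed
        cost = run["tokens"] * run["price_per_token"]
        per_clause[(run["doc"], run["clause"])] += cost
        per_doc[run["doc"]] += cost
        total += cost
    return per_clause, per_doc, total

# Hypothetical runs: two models on the indemnity clause, one on the reps.
runs = [
    {"doc": "SPA.docx", "clause": "indemnity", "tokens": 8000, "price_per_token": 3e-6},
    {"doc": "SPA.docx", "clause": "indemnity", "tokens": 5000, "price_per_token": 15e-6},
    {"doc": "SPA.docx", "clause": "reps", "tokens": 4000, "price_per_token": 3e-6},
]
per_clause, per_doc, total = aggregate_costs(runs)
print(round(total, 6))
```

The same rollup answers the "was it Marlene or Greg?" question mentioned later: add a user field to each run record and aggregate on it.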
Marlene Gebauer (45:43)
Ha ha ha ha ha ha.
Greg Lambert (45:46)
Ha ha ha.
Lennie Nuara (46:07)
And you can track all that. And the reason is exactly what you just said, because I see the change. My experience, at least over the past two years, is that the token cost is going down, but the number of agentic events is going up. So I wanted to track that, and it's been fine so far. It's not out of line. And I do believe that overall it will be significantly more efficient than people doing that work, whatever the work is that we assign. Significantly so, by an enormous factor, like a hundred to one. But you still have to recognize that it's a cost, and many vendors do not expose that. They just give you a bill, and, yeah, you used our platform and your AI upcharge is X, and you have no way of knowing: was it Marlene? Was it Greg? Or was it Lennie, or someone else?
So.
Greg Lambert (47:01)
It was Marlene.
Marlene Gebauer (47:02)
I’m listening to him I’m getting flashbacks of the many conversations I’ve had with people about like, yeah, you spent this much money to do this. It’s like, because you didn’t know what you were doing.
Greg Lambert (47:08)
your Westlaw bill was $50,000. ⁓
Lennie Nuara (47:13)
Right. And it’s,
and it matters. It’s the idea is, is to give people all the tools necessary to be more efficient, not just, you know, this blanket, I’ll get, throw everything here, get an output there and know where we’re going. It’s going to change. going to, it’s changing every quarter, let alone sometimes every other week. But I mean, honestly, it’s, it’s ramping and it’s great. I love tech, but let’s recognize what it is. It’s a tool and let’s manage that tool and our talent and then produce better results.
Marlene Gebauer (47:41)
So Lennie, you mentioned model monoculture a little bit earlier. Can you expand upon that a little for us?
Lennie Nuara (47:49)
Yeah, and I think I touched on it a little bit. Model monoculture is basically saying that all the models are pulling a significant amount of their content from the same sources. For example, they're all looking at the EDGAR database from the SEC for contract clauses. Yes, they can have others; most of them don't. And over time, if you're drafting based upon those, it's a drive to mediocrity. And if you don't recognize that issue, it'll bite you in the ass at the end. That's the problem: you think this is a good example, a good document. As I said, it's written well, but that's market. Okay, but you don't want to be market. You want to use your leverage in your deal. It's the way of thinking that we get paid for: get me the best deal you can based upon my circumstances. Don't get me what's standard. Now,
in the VC world, for example, there's the NVCA, the National Venture Capital Association, which has standard agreements. But if you look to see what's actually done on those deals, they're all tweaked. They start with that, but then they all tweak them. And they're not all filed, by the way. Some are, but not all. And so, on this concept of all the models giving the same answers: yes, on our platform you can actually run different models and different versions from each provider. So various versions of Anthropic, various versions of OpenAI, various versions of Gemini. I usually pick the best one and just spend the money, but you can pick and choose what you want to do and see what results you get. But you want to see some crossover, and you want to see differences. And if people aren't cognizant of that, I think it's a greater risk to our wisdom. The biggest thing in our practice is that our wisdom is our value.
Our talent is our value. If it's just cookie-cutter stuff, well then, fine, you don't need super-elite lawyers. If it's truly cookie-cutter, you don't want to be working on that work anyway. I think all of us are at firms doing more difficult work than many other firms. It doesn't mean the other firms are unimportant; they aren't, but they don't need to be spending at that level. There are lots of other issues. But if all those big firms are now reliant on LLMs that are all pulling from the same base of agreements, and you're expecting them to give you the spin that you need, don't. Expect them to give you what is flat, you know, mediocrity. And mediocrity might be okay for certain clauses, but not for most. It might be fine for getting an assignment letter out to the landlord to get their consent on something. Okay, there's no risk there. Does it ask for consent? Does the person sign it? Though you might still want to look at a couple of the clauses; you don't want to miss an issue there. This monoculture thing, and people call it model monoculture, I'm not sure I really love the term, I adopted it because it's out there. There's research that's been done that says people are missing this completely, research at Stanford and other places. There are two labs I get a lot of information from: Dr. Megan Ma's lab, which is the Lift Lab at Stanford, and then RegLab, I think it's RegLab, I forget. But Stanford's got some great stuff. Some of the other schools have it as well. You've got to be looking to see where the models are, because that's basically the associate pool you're pulling from. You hire associates because there's a spark or something in them that you really wanted. You don't just say, give me a first-year, give me a second-year. That's not who you hire. You hire somebody for that spark that you see, that they're really going to be good. They wrote something creative, they write really well. Okay, well, the model can do that, but they wrote something that I didn't expect. I used to look for the fire in the belly of the associates I was interviewing. I wanted to see that fire. If I didn't see it, I'm like, great kid, great statistics, great grades, but they're not going to cut it.
Marlene Gebauer (51:51)
So Lennie, you actually got a little ahead of my next question, but I know you're a huge reader, a huge ingester of information, a lawyer-technologist. You mentioned a couple of things that you go to for staying ahead of the business of law, but what are maybe some other go-to resources that you use?
Greg Lambert (51:56)
Hahaha
Lennie Nuara (52:15)
One of my favorite things to read is The Information, which is a newsletter from Silicon Valley. They track all the technology companies, all the latest, and not just the startups, though they do that. They're not legal tech, they're just tech. I spend most of my time reading about tech, because I buy tools, right, and I use tools. I don't care if they're legal tech tools; they're just tools. I like databases. Most lawyers don't care to read about databases. I did. I was an early adopter of, then after that, Airtable. I thought Airtable was marvelous. It's a relational database. You say that to most people and they'll just fall asleep. They don't care. They don't want to know. So I track the traditional technology sector very, very closely. Between that and my feed on LinkedIn, those are two go-to sources. But I also track Bloomberg and the Wall Street Journal, their coverage of the tech industry and trading. Why? Because trading essentially funds the tech industry, which then funds the innovation. So I look for that, I don't want to say virtuous loop, but that relationship really matters. Obviously I mentioned Megan, who's a great friend and our partner on Deal Mentor, but the work she's doing out of the Lift Lab is also fabulous for seeing what's coming, essentially frontier work. It's about frontier models.
Lennie Nuara (53:38)
But her work is truly on the frontier, which is just so much fun. I see her maybe six times a year; I'm out there working with her on Deal Mentor and other stuff. Other things: tech industry publications, PC Magazine all the way through to InfoWeek, and the cybersecurity journals that are out there. And then the more mundane, for the legal profession: I look at the sanctions and ethics litigation where someone hallucinated, the letters on Law.com and others, what happened to the unnamed top-five law firm recently with their filing with hallucinations in it, and the other ones down in Alabama, and so on. Those are informative. I find it laughable, though, that they're complaining so much about hallucinations. Do you know that early on, sending an email was a violation of the ethics opinions? I was just laughing.
Lennie Nuara (54:36)
When I saw that, I was like, my God, you people. Look, I come from a world where there were these, remember these things, okay? This is what fed the machine behind me. Of course I have one, it's right behind me. But it's like, email, my God, Lennie, you're sending an email, that's a violation of our ethics. I'm like, no, it's not. I don't care, I'll take that fight. Nobody ever sued me. So I'll take it. But you've got to track that stuff.
Greg Lambert (54:41)
A 5.25-inch floppy disk. Into your Osborne.
Marlene Gebauer (54:44)
I like that you have them handy, because you need them for that.
Greg Lambert (54:58)
All right, we peeked into the past with your 5.25-inch floppy disk and your Osborne portable computer. So now let's peek into the future with your crystal ball. What do you think, over the next short period of time, does the legal industry need to be prepared for? What's your take?
Lennie Nuara (55:09)
Hahaha
goodness.
Well, I already mentioned model monoculture; that is critical, and it depends on where you are in using the stack, right? If you're using it for the pedestrian stuff, you're not going to be worried about it. If you're using it for CLM, you know, contract lifecycle management, contract review, drafting, you have to be worried about it. You have to be looking much more carefully at that. And if you're not, you're toast. I mean, you're really in trouble. You're missing a big issue, bigger in my mind than hallucinations. And it will probably present itself when you see more of, it's not slop, but more mediocrity in the output you're getting from the machine. So that's item one. Simulation training: that's not just a plug for Deal Mentor, although it's a great product, but we need more simulation training to create a more engaging environment for younger lawyers to learn. And AI can do that. We created a simulator whose dialogue isn't based upon any prewritten script at all. The dialogue comes from a language model, and it's innovative beyond words. We should see more of that. Hopefully everybody will buy Deal Mentor, but I'm not here to sell that. The point is that you need that kind of inspiration to get people to really learn better, because otherwise we're going to be skipping a chunk of years and leaving them behind. And we don't want that.
Private equity is going to drive the spend at law firms, because they see this innovation. They'll push harder than the enterprises will. The enterprises will push, but private equity will push even harder. They're much more expense-focused, and they're going to say, why are we spending X, Y, or Z on counsel for whatever it is, on any kind of operational expense of lawyers, when they know they can use AI to write, essentially, memos on multi-hundred-million-dollar acquisitions? They can do that, so why aren't our counsel using it? That will drive a spend cycle that's going to compress the big firms' ability to bill. And then maybe they'll switch to flat fees or otherwise. And then, I don't want to say the pyramid breaks, but you may use those bricks in something that's not a pyramid. You may have silos, or, as I said before, barbells, or whatever. I think there has to be a change there. It's going to change. AI is going to absorb a lot of junior-level work. The only other thing that might ameliorate that is just the nature of the practice of law. When I started practicing 40-some-odd years ago, in '84, there was nowhere near the amount of regulatory practice there is today.
Regulatory practice has exploded as a percentage of what a law firm does, along with the complexity on every transaction around the regulatory sector. No one really anticipated that, but we kind of grew into it. The existence of AI will help with regard to that. But the point is that everybody thought, well, the practice of law will get more and more efficient, and so on and so forth. It didn't, really; it created new sectors of things to look at.
And so smart firms will figure out that maybe we have to shift people around and train them differently, and so on. I don't think they're going to break, but what people do and how they do it is definitely going to change. And the smart firms will figure out how to repurpose people in different ways, change the dynamics, and so on. I think if they continue to do what they've always done, which is put talent first, smart minds will figure out how to deploy that talent with the right tech. I'm not trying to undermine or speak poorly of tech, because obviously I love tech, right? But that's where it will go. Ultimately, if they're smart, they'll adopt it and use it properly and successfully. And if they don't, the dinosaurs will die. Although the dinosaurs did rule for a couple of hundred million years, and we've only been here for a few. So anyway.
Greg Lambert (59:26)
They had a good run. All right, well, Lennie Nuara from Flatiron Law Group, I want to thank you for coming in and nerding out with us today, showing us some of the old tech, and peeking into the future with us.
Lennie Nuara (59:45)
Thank you so much. I really appreciate it, Greg and Marlene. It was really a fun time; you let me blather on about things. I appreciate it, and I look forward to seeing you guys more on this great podcast.
Marlene Gebauer (59:45)
Thanks, Lennie.
Yeah, well, thank you, Lennie. And thanks to all of you, our listeners, for taking the time to listen to The Geek in Review podcast. If you enjoyed our dive into this topic, please share it with a colleague.
Greg Lambert (1:00:07)
And Lennie, if people want to learn more about you or Flatiron, where’s the best place for them to go?
Lennie Nuara (1:00:13)
Our
website is now flatironlaw.ai and my email address and contact details are there. So it’s flatironlaw.ai.
Marlene Gebauer (1:00:23)
And as always, the music you hear is from Jerry David DeCicca. Thank you, Jerry. And bye, everybody.
