Tony Thai and Ashley Carlisle of HyperDraft return to The Geek in Review podcast to provide an update on the state of generative AI in the legal industry. It has been six months since their last appearance, when the AI hype cycle was on the rise. We wanted to get them back on the show to see where we are on that hype cycle at the moment.
While hype around tools like ChatGPT has started to level off, Tony and Ashley note there is still a lot of misinformation and unrealistic expectations about what this technology can currently achieve. Over the past few months, HyperDraft has received an influx of requests from law firms and legal departments for education and consulting on how to practically apply AI like large language models. Many organizations feel pressure from management to “do something” with AI, but lack a clear understanding of the concrete problems they aim to solve. This results in a “solution in search of a problem” situation.
Tony and Ashley provide several key lessons learned regarding limitations of generative AI. It is not a magic bullet or panacea – you still have to put in the work to standardize processes before automating them. The technology excels at research, data extraction and summarization, but struggles to create final, high-quality legal work product. If the issue being addressed is about standardizing processes or topics, then having the ability to create 50 different ways to answer the issue doesn’t create standards, it creates chaos.
Current useful applications center on legal research, brainstorming, administrative tasks – not mission-critical legal analysis. The hype around generative AI could dampen innovation in process automation using robotic process automation and expert systems. Casetext’s acquisition by Thomson Reuters illustrates the present-day limitations of large language models trained primarily on case law.
Looking to the near future, Tony and Ashley predict the AI hype cycle will continue to fizzle out as focus shifts to education and literacy around all forms of AI. More legal tech products will likely combine specialized AI tools with large language models. And law firms may finally move towards flat rate billing models in order to meet client expectations around efficiency gains from AI.
Marlene Gebauer 0:07
Welcome to The Geek in Review, a podcast focused on innovative and creative ideas in the legal profession. I’m Marlene Gebauer,
Greg Lambert 0:14
And I’m Greg Lambert, and I’m fresh back off the plane, 2:30 in the morning arrival time, back from Boston. Yeah, it was terrible. Because I had a layover in Baltimore, I now know more about the Baltimore airport than anyone really should. But it was a great conference, 1800 of my closest friends
Marlene Gebauer 0:38
showed up. Literally, yeah, eighteen hundred of his closest friends.
Greg Lambert 0:41
Yeah, my wife and my sister-in-law always tag along with me. And they noticed that I can go about seven to 12 feet at a time before getting stopped by the next person wanting to catch up. So they’ve just decided to stop even being around me, and they just go out and do their own thing, which is probably best for everybody.
Marlene Gebauer 1:01
And I should let everybody know that we’re actually recording from The Ion today in downtown Houston. The Ion is an innovation hub for a lot of startups and other innovators. We’ve been here before, and in fact, we’ve actually interviewed Joey Sanchez, who basically runs the place. So we’re happy to be back here again.
Greg Lambert 1:26
Yeah, our original plan of drinking beer and podcasting fell through. Greg’s plan, never my plan. If there’s beer, you know it’s me.
Marlene Gebauer 1:39
We are also very happy to have Ashley Carlisle, CMO at HyperDraft, and Tony Thai, founder, CEO, and chief engineer of HyperDraft, back on the show. Ashley and Tony, welcome back to The Geek in Review.
Ashley Carlisle 1:52
Thanks for having us. Happy to be here.
Tony Thai 1:54
And by the way, Greg, I’ll be joining you on the drinking side.
Greg Lambert 2:00
So back in January, we talked with both of you about the hype around generative AI, and we even titled the episode “ChatGPT: If It Sounds Too Good to Be True.” We’ve had six months since then, and Marlene and I have recorded more than 25 episodes of The Geek in Review since then, most of those on the hype of generative AI. We kind of made this arrangement when we talked to you initially that we wanted to bring you back on. So let’s figure out where we are in the discussion.
Marlene Gebauer 2:35
And you know, before we do get into that discussion on generative AI, what’s been going on at HyperDraft since we last talked in January? I know you’ve been busy, and we’ve been trying to get you both on the pod for about two months now, so I’m glad we were able to arrange it.
Greg Lambert 2:49
I hear not all of that was HyperDraft related.
Marlene Gebauer 2:52
That’s true. Some of it was personal stuff. So that’s good.
Greg Lambert 2:55
Ashley anything new in your life?
Marlene Gebauer 2:58
Yeah, you want to share with our listeners?
Ashley Carlisle 2:59
Well, thank you for your patience. And Tony can go into this in more detail, because HyperDraft is our whole team’s baby, but really his original baby since, what, 2017, 2018? It has been a busy few months for us. We really are scaling because our customer base has grown, which we’re very appreciative of. But because we do customization for every client, making sure that everyone’s getting the level of service they expect has been a lot. So thank you for your patience as we go through our pipeline that way. But yes, I went on my honeymoon. So happy to be back.
Marlene Gebauer 3:35
Ashley Carlisle 3:36
Yes. Thank you so much. And just as a brief intro as to who Tony and I are, because I know Tony is probably not going to go into that: we are both former corporate attorneys; we met at Goodwin Procter. And HyperDraft is a trusted legal tech partner that works with law firms and legal departments, providing AI-powered document and workflow generation. So we get a lot of questions about generative AI. And we got a lot of questions and comments after being on your podcast in January, too, because I think we’re of the mindset that you should be honest in the hype cycle when it comes to legal tech. I think there are a lot of people in legal tech who feel like we have to evangelize everything and only say the positive, because we don’t want lawyers to, you know, not be interested or freak out. But Tony and I are of the belief that if we’re not honest, then people don’t actually get the tech literacy they need to make informed choices and actually have digital transformation within their organizations. So yeah, to answer your question, I think everyone’s burnt out on the AI chatter to some extent. I think we’re at the end of the hype cycle, but it has been an interesting one. And Tony, I don’t know if you want to jump in there. I know you have a lot of thoughts on this.
Greg Lambert 4:49
So Ashley has been very busy, but what about you?
Tony Thai 4:53
I don’t do anything. Yeah, I do know now from personal experience that everything grinds to a halt when Ashley’s not around. I would say that over the last few months, especially after we had our initial chat in January, a lot of clients have come to us and asked for more consultative services around learning about AI and how they can apply it in their business. That’s taken up a good chunk of time that we didn’t anticipate going into this year and coming out of last year. And I would say the other part of what’s kept us busy is, as you know, the legal tech sales cycle takes quite a long time, and all the seeds that we planted over the last few years have started to sprout. So we’re starting to onboard a great number of clients nowadays, as opposed to just planting seeds. My opinion on the hype cycle? I don’t think it’s quite died out yet. I still see a lot of posts about it, and I still see a lot of activity, especially as a nerdy engineer on GitHub, which is what we use to figure out what the popular projects are. Still a lot of AI talk. But what I think has started to happen a little bit more, and I’d be curious what you’re seeing in the market as well, is that the general population has been saturated with at least a baseline level of knowledge. So now that they kind of know what it is, they’re thinking, okay, great, there’s a solution, where’s the problem that we need to fit this into? I think that’s why it’s been a little bit quieter in terms of the hype: consumers, the buy side of the equation, are figuring out, is this an actual solution for a problem that we have? What was the problem in the first place? And that’s what we’ve been spending a lot of our time talking to folks about.
Ashley Carlisle 6:56
To add on to that point, another thing we didn’t realize, which makes sense, is that hype cycles can also lead to misinformation bubbles. A lot of times people come to us with questions, and they think they know what they’re asking, or they think they know the technology they want for a certain use case. And we’ve been surprised at how much education people need, and want. So I think that’s going to be a big thing in the next couple of years. I don’t know if it’s companies popping up, or us doing more consultative stuff, and other companies doing that too, but we’re hoping that that misinformation can be corrected, because it helps us. And then Tony gets a lot of weird questions where he and the engineers are just like, wait, what? Where did you get this? Right?
Tony Thai 7:38
It’s equivalent to showing up to your doctor and saying, hey, I Googled what’s wrong with me, and I think that this is the answer. That is useful in some respect, but not when you’re adamant that, you know, I need this or I want to do this. Our GC, who, yeah, Marlene and Greg have met, has said it’s like we’ll show up selling a car, and they’ll ask why it doesn’t fly yet. And her whole point is, your current mode of transportation is walking. We’re offering you a vehicle and you’re asking for flight. So there’s quite a disconnect.
Marlene Gebauer 8:13
You know, between the misinformation and then just sort of, I mean, are you finding that people just aren’t really sure about what they want, because it’s such a new technology and people just don’t fully understand it? Are you seeing that some have definite, concrete use cases, and they really know what they want and can move forward? Or is it more that people are just in exploratory mode and saying, I want this thing, and I want to be able to tell my clients that we have this thing, but they haven’t identified, as you said, Tony, the problem that they want to solve with it?
Tony Thai 8:53
Yeah, I’m going to let Ashley take the first stab at this, because I worry that I’ll go into rant mode.
Ashley Carlisle 8:59
It’s a great question, because we’ve gotten both. In fact, this week alone, Tony and I have been in conversations with major law firms and, you know, Fortune 10 companies that have different perspectives on this. I think the main theme we’re seeing is that people in law firms or in legal departments are getting a lot of pressure from other people in their organization saying, you need to be on top of this, I need to know what’s happening in AI and how we’re going to use it. And people have different responses to that based on what they know. So some people come to us and they’re like, we know this, blah, blah, and we’re like, well, your assumptions on this are wrong, but we can clean this up and figure this out for you. Or people come to us and they’re like, so I know I need to use this, please tell me the business case and the use case and basically every soundbite that I should tell my board. Both have been very interesting conversations; we’ve had a litany of each. But it is very interesting to see how legal is being pressured to use legal tech or AI, which I think a lot of us vendors are secretly happy about, because, as you know, the adoption side of legal tech has been slow for, what, 30 years now. So it’s nice to have kind of a tipping point. But it does lead to a lot of awkward conversations.
Greg Lambert 10:09
To start with, where do you think that pressure is coming from?
Ashley Carlisle 10:12
I think it’s coming from the fact, and Tony is open about this when you talk to these people, that a lot of the management of firms or these big organizations are just now realizing that AI has been used, maybe in their R&D department, their engineering department, their development department, and that there are a lot more business use cases than they realized. And that legal, being a cost center, should be something that’s also focusing on this.
Tony Thai 10:34
Yeah, let me add just a touch. I think what’s really frustrating in the hype cycle, and I’ve said this numerous times now with prospective clients, is this: I always tell them, hey, come up with a wish list of features and problems that you want to solve for, and send it out to all the prospective vendors. Then any vendor that says they can solve all of them, you should remove from your list, because that’s just not the way it works, right? Anyone that promises you everything under the sun isn’t going to deliver on it. But the most important thing, which I think is hard for many organizations to hear, is that you must put in the work to get something out of it. There is no magic bullet. AI does not solve for the fact that your team is disorganized, or slow to move, or slow to standardize. And then when we talk about generative AI, this is where I get really ranty. You hear about all this, hey, let me generate these provisions, these documents, all this custom generation. Especially randomly seeded generation, which is a lot of what this generative AI does, is the antithesis of standardization, right? If I generate the same legal concept and try to explain it via some contractual provision 50 different ways, I’ve not standardized crap. So I spend a lot of time talking to clients saying, hey, it’s good to develop a system, do the gap analysis, and then figure out how AI can be applied to the process. But we’ve got to be thinking in a process-oriented fashion, with a view to standardizing the process, as opposed to saying, hey, I’ve got a really cool toy, I want to use it everywhere. We see it all the time with engineers, too. When I teach a junior engineer a new pattern of writing software, you see them write it everywhere, even if it doesn’t work that way.
And the same thing applies with lawyers: you teach a lawyer a neat provision or a way to write something, and you see it plastered everywhere. That’s just the nature of, you know, new technology or new learnings. But at the end of the day, what needs to happen is people need to be willing to do the work. And what I’m glad to see with AI is that it’s opening people’s eyes and increasing their willingness to put in the work so that they can get the output they desire.
Marlene Gebauer 13:00
I liked the point that you raised, that if a vendor says, okay, we’re going to satisfy all of your pain points, you should throw them out right away. Are you seeing that firms and organizations are starting to recognize that perhaps it’s not going to be a one-size-fits-all type of solution, that they’re not going to buy one thing or apply one thing in terms of generative AI? That you may have a lot of different points of AI, whether that be through trusted vendors, a model that you train yourself, or something commercial that you purchase? Are you seeing any of that, and what are your thoughts about it?
Tony Thai 13:43
I think the bulk of firms are not even remotely there in their thinking, right? It’s so nascent in terms of the build versus buy versus acquire calculations that they need to do that they’re not even thinking that way. I got off a call last week with, I would say, a top 10 law firm that told me that they’re going to wait for the perfect solution. They were like, that’s exactly right, wait for the perfect solution, which made me chuckle, because I’m like, yeah, good luck, right?
Greg Lambert 14:12
I think that comes out in Q3, right?
Ashley Carlisle 14:19
Q3 of 2030, maybe?
Tony Thai 14:20
Yeah, yeah. So I don’t think folks are quite there yet. And I don’t mean to bash these organizations; it’s the nature of running a large organization, you have to get consensus amongst the team, and it’s slower moving. I think that’s largely why, segueing over to Greg’s point on build versus buy versus acquire, Thomson Reuters decided to acquire. Does that surprise us? Not really, in the sense that Thomson Reuters is a massive organization. For them to decide on a dime to pivot, or not pivot but start to develop their own in-house AI that’s as competitive and focused as the CaseText team’s is, would be, you know, nearly impossible. So I thought it was a good move by them to pull in the expertise and the team that they need to really round out their offerings.
Greg Lambert 15:18
Yeah. You know, were you guys surprised by this? Or did you think this type of acquisition or merger was what was going to happen? And do you think you’ll see more of it in the future?
Ashley Carlisle 15:28
So I had heard rumblings, but I had both reactions. At first, I looked at the price and I was like, oh, that’s good for our industry. And then I thought about it more, and I was like, should the price be more? I’m not quite sure. I know Tony, as an M&A attorney, might have some more thoughts on that. The other thing that was interesting: you know, Bob Ambrogi does a great job of covering our industry, and right after the acquisition, he covered an investor call that TR held to explain the acquisition to its investors. A lot of questions were based on the premise of, why CaseText? You know, a lot of companies have affiliations with OpenAI, so why them? And I was really happy with the answers the TR team gave. It kind of showed that, we assume, they know what they’re doing. Their answer was basically that with these large language models, it’s all about data and the right guardrails, and a lot of these other tools have been promised but actually haven’t been built, like Harvey AI and a lot of this other stuff that’s been marketed widely. So I thought it was very smart of CaseText to be the first one to actually build something. I know they were already in pilot with most firms before this news, but still, to just have something. Because I think that’s another thing we talk to law firms and legal departments about: they just assume the stuff is already built, and a lot of times a lot of this marketing and press is about things that are in beta, or things that they can merely sign up for. So I thought it was very smart, the way the CaseText team did it. And I know you guys have talked with some of their leaders. I think we’ve always valued them as someone who’s not afraid to do things before other people do, and someone who really innovates in the space.
So I’m curious to see how this acquisition impacts that part of their identity, because being a part of the TR family might add a little friction to them being the first mover in legal research. And then the last thing, which is something we talk a lot about with our potential clients when people have questions: this kind of shows the limited use case of generative AI as it stands today. CaseText is known for legal research, and with CoCounsel, from what I’ve seen, it is legal research, data extraction, some summarization. People are often surprised when Tony or other members of our team have to explain to them that those are really the best use cases at this point, because people think it can do everything. And Tony can go into this, but based on how the model is built, generative AI is not great for all legal work. So I really think there are so many lessons from how CaseText has navigated this.
Marlene Gebauer 17:59
If you get a, you know, transactional attorney who’s looking at that, they’re like, well, no, that’s not important to me, I want something that drafts. And it’s like, yes, but this has been trained to do legal research. It’s been trained on case law. So, you know, that’s what this does right now.
Greg Lambert 18:18
Well, speaking of that call TR did announcing the acquisition, there was one theme they kept coming back to throughout it, and I wanted to get your opinion on it. One of the reasons they found CaseText more attractive than the other ones out there was that CaseText had this long-standing relationship with OpenAI; it was one of the first to get access to GPT-4 and start playing around with it. But do you think, as we see other large language models come out, whether it be Anthropic or Google Bard or Llama or any of these, that that advantage will quickly shrink?
Tony Thai 19:07
I think the first mover advantage is useful for learning how to build the guardrails and the training and the moderation piece of it. But yeah, it depends, a great lawyer answer, on your definition of what the competitive advantage is, right? What does it mean for an LLM to be better than another model? In whose opinion is it that one model is, quote unquote, performing better than the next one? Are we using exams, or what is that standard? From a statistical perspective, yes, we can observe that and compare the numbers. But I think with the release of the Llama 2 models and more open source projects, that competitive moat definitely gets smaller and smaller. I think the advantage of being a first mover is having had the opportunity to spend more time with clients and customers, earning their trust and learning their use cases. That, in my opinion, is the actual competitive moat, not just the partnership; I don’t think that means much of anything. What I’d say on the competitive moat side, in the way we try to gauge what a good model is, or what it means to be good AI for usage, is evaluating this last mile concept that I talk to a lot of our customers about. AI is really good, except for the last mile. The last mile still needs to be managed by a human being who has judgment and the ability to have intent. That’s made evident by those news articles about lawyers who relied solely on ChatGPT to source cases and then threw them into memos. And these are not, you know, junior attorneys. They’re seasoned attorneys who should know better and do that kind of last mile double check.
Greg Lambert 21:14
Yeah, whose name is now a verb. To Schwartz up a brief?
Tony Thai 21:23
Still, I feel bad for anyone with the last name of Schwartz. I hadn’t realized that was a thing, but that’s pretty funny. But yeah, I would say that last mile is probably the point of conversation that I bring up a lot with clients now, for them to take a step back and think: what is that final mile when I say that something’s approved? Whether I’m reviewing a draft of a memo or a brief that needs to go out, should I be the last person, or who should be the last person that makes that decision? That’s an interesting point that we’ve realized a lot of folks haven’t thought about when it comes to AI. And that actually lets them work backwards into figuring out what the good use cases are for AI.
Marlene Gebauer 22:04
I had another question, sort of pertaining to the hype cycle. So initially, everyone’s like, gotta have it, gotta have it, gotta have it, clients want you to have it. And now we’re starting to see clients saying, well, wait a minute, I don’t want you using AI. I don’t want you using that on my stuff at all. So now we’ve kind of gone the opposite way. And I’m thinking, do we have a situation where we’re going to have to have a bunch of mini models that train on very discrete amounts of data? And our clients, or our security folks, are they ever going to be comfortable with whatever guardrails are put up? These are a lot of questions I’m giving you. And if not, how are we going to take advantage of this technology?
Tony Thai 22:58
Great questions, Marlene. I don’t know what your availability is like, but I wish you could just join us on most of these calls, because then you could ask these questions for clients, and that would streamline the conversations a little bit. Yeah, those are really good questions. I’ll tackle them one at a time; Ashley, obviously, jump in whenever. So let’s talk about the ethics and confidentiality concerns. Mini models: yes, you’re starting to see it with the gatekeeping that’s being done, or the gates that are coming up around a lot of the data sources. Reddit, Twitter, you know, name the aggregation source. They’re blocking OpenAI and a lot of these scrapers from being able to scrape that data to feed into the models. The next natural progression of that is each one of those companies is going to offer their own model, right, and build their own model so that they can make use of their content. So yes, on the kind of boom of mini models, I think that is definitely going to be a thing. But by the very nature of machine learning, you need massive data sets to train these models. And when your clients tell you, hey, you’re not allowed to mix my data with other people’s data, then it becomes a tunnel vision issue. Because if you ask a model, hey, have you seen this type of provision, or have you seen this type of case or fact pattern before, and it tells you definitively, nope, never seen it, you don’t know if that answer is because it’s been segmented from this other client data, or because that’s truly market information, right? My favorite example from working at major law firms is this: I think from a client perspective, everybody thinks that there’s some sophisticated system that tells us transactional attorneys what a market provision looks like. But the answer, and the dirty little secret, is that usually an email goes out firm-wide saying, hey, I’m looking for this provision.
Can anybody tell me what they think market is? Which is wild, right? In this day and age, we haven’t even figured that out. The reason I bring that up is I think the models are not going to be that much better at figuring out whether or not something is market, with these kinds of mini models. And then, to segue into your other question of, well, then how the heck do we make use of AI if it’s not perfect? That’s the stuff that Ashley and I deal with on a day-to-day basis. Because we’ve always had AI as a back office tool for us, we know how to interact with it, but we’ve never made it a major premise of any service offering, because at the end of the day, nobody cares how the sausage is made; they just want the result. And I think that shifts a lot of people’s thinking into, okay, maybe I can’t use machine learning here, but I know it can be automated, so how do we do that? That starts a conversation with engineers, realizing, oh, I could do this, I just don’t get to do it the machine learning way, I just have to spend the time and the effort. I keep saying this: fundamental laws of physics apply, you cannot get something for nothing. If you put in the work, then you can get the output you desire. So if there’s something that you feel should and could be automated, whether through AI or not, now is the time to ask that question and then talk to the engineers and say, hey, can we do this with AI? And if not AI, meaning machine learning and large language models specifically, can we do it some other way?
Greg Lambert 26:31
Yeah, I’m just curious, because I know we’ve been using AI on the back end of a lot of products, probably for the last seven years or so, maybe a little bit longer, but really full force the last seven years. Does this hype around generative AI, this one slice of AI, kind of cool down the market for a number of advancements in back-end AI that may be more extractive than generative? I’m just curious because, as a vendor, when you’re developing products, you may have a solid process that you’re using to tweak the back end, to automate it, to use AI in that way, and then all of a sudden, in November, the switch gets flipped and everyone’s talking about gen AI, which may or may not help what you’re doing at all. Sorry, that was kind of a roundabout question; hopefully you get the gist of it.
Tony Thai 27:34
No, I got it. I think it takes a lot of discipline, and it’s something that Ashley and I and the rest of the team talk about a lot: to stay disciplined and focused on the end product. It’s hard not to jump on the hype train. Ashley has actually stated this more concretely than I have: the vendors that have jumped on it so strongly out of the gate will not look so great next year, or the year after that. So it’s really about staying focused on the end product. And as we mentioned, the end product for us, and what we work on with a lot of our clients, is that consultative approach of saying, hey, what is the problem you think you have that we’re going to solve? Don’t worry about the tools; we as the vendor and as the engineers are going to worry about that. The fact that somebody shows up and says, hey, here’s a wrench, paint my walls, is not going to help us, right? If the problem is we need to paint the walls, we will deliver that solution for you. So a lot of it has to do with staying focused, and then, you know, educating the client, as Ashley mentioned. In terms of how generative AI has affected AI research in general, and I think this is part of your question, tell me if it’s not, it’s only been for the positive, which is fantastic. I would say the glimmer of hope and optimism that I pulled from this is a sense that it’s gotten people to read more and learn more about what’s possible. What worried me and Ashley when we had this conversation in January is that people made it overblown and then over-relied on it, which has already started to happen, right? But the education and the focus on AI, that’s fantastic. That leads younger generations and kids to want to learn about this stuff.
If being able to generate a picture from just prompts inspires someone to become an artist, or to look into programming as a way of expressing themselves artistically, that’s fantastic, right? I’m excited about that portion. Do I think there are people on the academic research side that are not so happy with it? Absolutely. Because there are plenty of people who haven’t jumped on the generative AI hype that much, because they’re like, dude, I’m busy developing AI models that help us solve actual problems, like figuring out different drug dosages for patients, which some of my friends are working on, as opposed to having it generate a picture of a weenie dog holding a hot dog on a boat somewhere, right? When we compare those two problem cases, one seems ridiculous and the other one seems worthwhile. So hopefully we’ll steer towards the more worthwhile ventures as the hype cycle dies down.
Greg Lambert 30:28
Now I feel bad for that country music song they had to create for me. So.
Ashley Carlisle 30:35
I mean, that’s for our entertainment, right, Greg? Sorry. That’s all good. One thing that I’ve joked with Tony about, but I just think it’s true, to echo what he said: for our team, we kind of view Gen AI as the gateway drug to the AI stuff that we’ve tried to talk to lawyers about for years, and they’ve just gotten bored. So that’s good and bad. OpenAI, and I think I mentioned this in January, but my thoughts have kind of been cemented since, picked the marketing tenets for AI and executed them really well. The tenet of show, not tell: that is OpenAI and ChatGPT in a nutshell, and that’s why it popped off. The good and the bad of that: the good is that people want to learn more about Gen AI, they realize the limitations, they go over to the broader AI side, like Tony said, and hopefully we can find a practical solution, maybe not what they originally thought. But it also makes for a weird juxtaposition. As someone that’s watched Tony, obviously I’m not an engineer, but we’ve had conversations about how he’s built our product over the years and how he’s added on new things. From my understanding as a non-engineer, the whole process of building ML models and adding these AI features is very unsexy. So the juxtaposition of ChatGPT coming in as a flashy marketing darling, against the reality of everyday AI that people have been using in the background for years, for some people it’s a very weird conundrum. In some ways it’s great, but in other ways it makes things awkward, because Tony will tell you, he did ML and AI before he even went to law school. It’s a very unsexy thing that requires a lot of patience.
Marlene Gebauer 32:08
I had a question about your thoughts regarding how Gen AI is impacting other technologies that we’ve seen in the industry, like expert systems, RPA, or blockchain.
Tony Thai 32:25
Yeah, I would say it has a negative effect. And the reason why I say this is: with RPA, robotic process automation, if you know where you want to go, let’s say we’re driving a car trying to get from point A to point B, the solution set is, okay, I need to go to the grocery store, what streets do I need to take, what steps do I need to do to get to that grocery store? You have to learn it, you have to figure it out, you have to kind of reverse engineer it, and then you write a list of instructions for a person to go do it. Now with GPT, a lot of these Gen AI models, these LLMs, are focused on collecting all this data and then making a statistical, probabilistic estimate as to what the steps should be to get from point A to point B. That’s just more wasted energy and more wasted time, rather than just reading a damn map and going from point A to point B. And why risk doing the wrong thing? When I bring this up, a lot of people say, you’re just a hater. No, I would love to use it. The problem with it is, let’s say we have an API, right? Let’s say we have a banking API. If I’m going to use ChatGPT and its plugins to transfer money, which has a nonzero chance of sending it to the wrong bank account, why the hell would I do that? Why not just read the documentation, know where I want to go, and then write the instructions down? It takes less time, less effort. As a society, we’re just so incredibly lazy that we want to be able to say, hey, it’ll read my mind. We’re incredibly lazy. The same thing applies when people ask, oh, Tony, can you make auto-suggestions for me to respond on a contract provision? Absolutely, I can make recommendations. But it will never match your style perfectly. It will never match your intent. It cannot know your intent.
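Tony’s banking-API example can be sketched in code. This is a toy illustration only: the `BankClient` class and its `transfer()` method are made up for the sketch, not any real banking API. The point it shows is his argument that a deterministic, documented call path has no “nonzero chance” of hitting the wrong account, while a probabilistic plan does.

```python
# Toy stand-in for a documented banking API. Everything here is hypothetical;
# the idea is that explicit instructions written from the docs either succeed
# exactly as specified or fail loudly, with no statistical guessing involved.

class BankClient:
    """In-memory stand-in for a banking API with documented semantics."""

    def __init__(self, balances):
        self.balances = dict(balances)

    def transfer(self, src, dst, amount):
        # Deterministic, documented steps: validate, debit, credit.
        if amount <= 0:
            raise ValueError("amount must be positive")
        if src not in self.balances or dst not in self.balances:
            raise KeyError("unknown account")
        if self.balances[src] < amount:
            raise ValueError("insufficient funds")
        self.balances[src] -= amount
        self.balances[dst] += amount
        return {"from": src, "to": dst, "amount": amount}

client = BankClient({"alice": 100, "bob": 0})
receipt = client.transfer("alice", "bob", 40)
```

Run the same instructions a thousand times and the money lands in the same account every time, which is exactly the property a model sampling “likely” steps cannot guarantee.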
And if you’re going to write the damn intent down, then write the damn contract language down. That’s the most frustrating part: yeah, I want this. Just write it down. Just write it down, right? It’s better than asking somebody else. It’s like me saying, hey, Ashley, can you write this stuff? No, I’ll just do it myself. Anyways, rant over. It’s just so frustrating.
Greg Lambert 34:49
Tony, I’m not really sure I’m understanding your passion here.
Ashley Carlisle 34:55
Well, I think Tony brings up a good point, which we see time and time again, and which I don’t know if vendors really talk about, but they should. Obviously Tony and I are former lawyers, and we all know, as former lawyers working with lawyers, that a lot of times lawyers do things just to do things. They don’t think about working smarter; there’s just the way it’s always been done. And unfortunately, when it comes to trying to figure out what to automate, that’s the mindset they bring into that conversation. Some vendors are smart enough to check them and say, okay, we need to completely reset your mindset to make this productive. Others just want to close the deal, and then people are unhappy with the result and you have to fix it afterwards. We see that all the time: people come to us and they’re like, well, why wasn’t this implemented correctly? And we’re like, well, hold up, that was another vendor. And did you actually have these conversations? Or were you just thinking, we have to do this because we’ve committed, so we’re going to automate everything? Did you actually talk amongst your team and gather data? It’s a very interesting thing, and I get where it comes from, but I think a lot of people need to be more candid in the process and honest about it.
Tony Thai 35:54
They’re not going to be, Ashley, because there’s a perverse incentive to oversell and under-deliver. That’s why every time we show up to these meetings, we’re the third or fourth vendor that these folks have gone through, and they’ve been beaten down. Just this last week I said to people, we’re never the first boyfriend, right? We’re always the third or fourth relationship, and there’s all this baggage and hurt and pain, and we have to sit there and talk. We’re more therapists than we are tech solutions providers. We have to be realistic and cognizant that a lot of folks will just lie, right? And it’s hard to say, no, we don’t want your business, because that’s not possible within the budget that you want. That’s a tough decision to make, and it’s one of the benefits of not being venture-backed. The fact that we’ve built this business from scratch into a profitable business means we’re not reliant on how many clients we have; it’s about how many profitable partnerships we can build, as opposed to a pure numbers play of trying to get every client and logo they can. I like talking to you, Marlene and Greg, about this, because you guys have a realistic view, being on the inside and seeing how bad it is when these vendors come out and all they want to do is sell you the moon. But by the time you get there a year and a half later, political capital spent aside, you’re left with nothing, just complaints and churn.
Ashley Carlisle 37:28
I’d be curious to know your thoughts on that lawyer laziness, or just dealing with vendors and trying to figure out who’s candid and who’s not.
Greg Lambert 37:35
I think, you know, the proof is in the pudding, so to speak. When you actually get a product in front of you that you get to test, wow, you really see it, warts and all. Things like taking two to five minutes to answer one question, and then not being able to answer a follow-up, really affect the view and kind of burst the bubble. So I think one of the things we’re seeing is a lot of over-promising and under-delivering. I don’t think it will cause another AI winter, because this is the first time that this is really real from the consumer side of things. But if you keep over-promising and under-delivering, that has a long-term effect for the next boyfriend that comes along.
Marlene Gebauer 38:35
And I’ll say that it’s been kind of a mix in my experience. I’ve seen a lot of interest from attorneys, asking a lot of good questions, being very thoughtful. But as we talked about earlier, when it comes down to, okay, we need to decide what we want it to do, you have a lot of different people with a lot of different opinions. That’s part of the decision-making process, but it is kind of messy. And in terms of vendors, again, a mix. I’ve seen some vendors that are very candid about things, and then I’ve seen others where you do get the sense that you’re not getting a clear answer. And that is indeed frustrating.
Tony Thai 39:27
It must be hard for both of you, because you’re fielding these questions, and then there’s other information online. This goes back to the whole self-diagnosing thing when you go to your doctor. People say, oh, well, you can do this, so why don’t I have access to it? And then you have to sit there and say, well, one, our clients won’t let us just feed it a bunch of their data. Two, it’s not perfect, right? I know in a perfect world you’d have something that can automate the tedious parts of your job, but some of it is required. I don’t know if we’ve already talked about it on this podcast, but I’ve had this discussion with other engineers before. I said, I think our approach to adopting AI is very biased, because we’ve done the work before, right? Everybody here knows the thought process that needs to go into determining whether or not a contract provision is valid, or whether or not a specific case citation makes sense. The younger generations don’t have those neural pathways. So where do they start? And how do they determine whether an output is good or bad? They still need to do the work to build those muscles and those neural pathways. That’s what worries me the most. Don’t throw the baby out with the bathwater, right? Yes, there is a lot of tedious stuff that we’ve had to do, but some of it is part of the training. I read the case, right? I didn’t wait for the summary, I read the entire thing, and then I figured out how to interpret it. That’s what worries me about folks that try to oversell the AI: I think it’s a disservice to a good chunk of the younger generations, because they don’t know how to do the actual work before they automate it.
Marlene Gebauer 41:18
It’s like I tell my kids: it’s okay to be bored sometimes, because that basically leads to creativity, coming up with good solutions, and learning stuff. So we’ve been pretty critical of Gen AI here, but let’s turn it around: what is it actually useful for? Tony, you might have hinted at an assistant type of role. Is that one, or are there some other ones that you or Ashley can think of?
Tony Thai 41:51
I think of the stuff that we build, that our team builds, as second-brain activity: stuff that I suck at remembering, or that I hate having to do because it feels tedious. For example, searching through my email, or having to set up reminders to send additional emails. I’m sure everyone here has used Gmail and had it remind you, hey, you didn’t respond to this email three days ago. Those are the types of AI features that I find super helpful. And then on the contract drafting side, it never gets it right, but it gives me inspiration in terms of how I should approach a provision, or it gives me a list of things to think about, sort of like a checklist. Those are good use cases for me in terms of Gen AI applications in the legal industry.
Ashley Carlisle 42:41
To add on to that, because I get to be privy to a lot of these conversations that you have with our clients or potential clients: obviously, with the Casetext example, I explained how it can do research and data extraction, plus brainstorming and admin tasks. I was on a panel on Monday where people were describing how they’re using it in a law firm in business contexts or for brainstorming, not really for legal work product, and I think that trend will continue. Tony, I don’t know if you’d mind getting into this, but one thing I’ve found helpful is Tony explaining to our team and potential clients that it’s not that Gen AI is bad or not useful, but you have to know how the model is set up, just on a second-grade level, so that you can understand where it is limited, especially in regards to making legal work product. It’s good at research and brainstorming because it’s a statistical model, which Tony can go into. But in regards to creating consistent documentation (and as much as lawyers try to act like we invent a new cool thing every day, really it is quality control and consistency that keep us at the top of our jobs), that is not what it’s built for. So that’s a common misconception we have to work through with people. If you want new ideas, or as creative as Gen AI can get, sure. But if you’re looking for quality control, this is not that, because it is, I don’t know, Tony, a statistical model? I don’t know how you would sum it up, but I think people are very surprised by that.
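Ashley’s “statistical model” point can be shown with a toy sketch. The vocabulary and probabilities below are entirely made up, and real LLM sampling is far more elaborate, but the mechanism is the same: each next word is drawn from a probability distribution, so the same prompt can produce different outputs. That variability is useful for brainstorming and fatal for quality control.

```python
# Made-up next-word distribution for a contract-drafting model deciding how
# to phrase an obligation. Sampling with a high "temperature" spreads picks
# across the options; temperature zero always takes the single likeliest one.

import random

VOCAB = ["shall", "will", "must", "may"]
PROBS = [0.4, 0.3, 0.2, 0.1]  # hypothetical model probabilities

def sample_token(temperature, rng):
    """Pick a token; lower temperature sharpens toward the likeliest choice."""
    if temperature == 0:
        return VOCAB[PROBS.index(max(PROBS))]  # deterministic argmax
    weights = [p ** (1.0 / temperature) for p in PROBS]
    total = sum(weights)
    return rng.choices(VOCAB, weights=[w / total for w in weights])[0]

rng = random.Random(42)
creative = {sample_token(1.5, rng) for _ in range(50)}   # varied wording
consistent = {sample_token(0, rng) for _ in range(50)}   # one wording, always
```

Fifty high-temperature draws give several different phrasings, which is the “50 different ways to answer” problem; the temperature-zero path gives the same word every time, which is what standardized documentation actually needs.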
Tony Thai 44:09
Ashley summed it up perfectly. What I would say, Marlene, bringing it back to your kids, not to single them out, is I’m sure there are situations where you ask them to do something multiple times, right? Like, hey, you know,
Marlene Gebauer 44:22
Do you live in my house? I’m sorry.
Tony Thai 44:26
Or you’re like, hey, take out the trash, and they took out the trash, but there’s a bag of trash sitting next to the trash cans that they didn’t take out either. You’re like, no, don’t…
Marlene Gebauer 44:34
He really does live in my house. Exactly. Right.
Tony Thai 44:37
Exactly right. I’ve got siblings. And then you go out to the trash and you’re like, they took out the trash and put it next to the trash can, not inside the trash can. And at some point you’re like, it would have been faster had I just done it myself. The goal of talking to your kids about it is that eventually they will do it themselves, right? And I think that’s ultimately a good goal for Gen AI: if you give it enough data, then it’ll figure out how to do it.
Marlene Gebauer 45:04
Five years from now, it’ll get it right.
Tony Thai 45:09
You know, at some point, you’re just like, I’ll just take out the trash myself, right?
Marlene Gebauer 45:13
Like, when it doesn’t benefit me anymore. They’ll figure it out, then, you know,
Tony Thai 45:16
Exactly. So there is a cost analysis that gets done a lot. When we talk to clients, many of them say, we’re glad you don’t focus your sales pitch on AI, because we won’t buy a product that sells us AI. We’ve gone through all the iterations and we’re sick of it. We know exactly what we want; we want you to build it, and then we want you to tell us what it can or can’t do in the future. But we don’t care about the training of how to take out the trash; we need the trash taken out today.
Greg Lambert 45:48
Ashley, before we get to our crystal ball question, I want you to put your CMO hat back on, because I’ve talked with a number of CMOs at law firms that really love some of the summarization and some of the idea generation. Have there been any specific examples where you’ve used it with your CMO hat on?
Ashley Carlisle 46:09
Yeah, I think it’s very interesting. In legal marketing circles, people are trying to figure out new use cases all the time; obviously, for content and for brainstorming, it’s good. I think Google has gone back and forth on how it ranks automated versus non-automated content for SEO and how that gets flagged, so a lot of people are staying tuned to make sure they’re up to date on that. The last thing you want to do is get dependent on ChatGPT to create a lot of content and then have it not actually show up in the rankings. For now it’s fine, but that has been a big discussion point. I will say it’s not as good as if a lawyer or a legal marketing professional were writing it, so it really depends on what your end goal is. But when you have a large volume of stuff to do, it is helpful. So we do use it for brainstorming and for content. But like Tony said, I’ve had to really work with the prompts to make sure I’m giving it enough info, because if you do not give it enough info, it is the most vanilla, diluted, airheaded content you could find on the internet. It is.
Marlene Gebauer 47:09
It’s general, it’s very generic, you know? So it’s a good base. But I often wonder if sometimes having that base is a bit of a crutch, and then I don’t actually get as creative as I would have been if I had done it myself. Alright, so crystal ball question now. You guys know the drill: we ask our guests to look two to five years into the future and tell us what you see in terms of generative AI.
Greg Lambert 47:40
But since we may have you back on in six months? Let’s just shorten that for you a little bit.
Tony Thai 47:46
Ashley, want to take a first stab?
Ashley Carlisle 47:47
Sure, I’ll go first. Obviously, these are predictions, so it’s fun; we don’t know exactly, and I guess we’ll find out. For our team, what we’re predicting, which I kind of alluded to, is that the hype cycle dies down and people focus on tech literacy and knowing what AI is. So I guess our hope is that that sustains. Also, I think, like you said, we’re going to be in another chapter of the buy-build-partner phenomenon in legal tech, which has kind of plagued our industry for a long time, but this time with many models, coupling these large LLMs with specialized legal AI products. It’s going to be a whole new phase for our industry, and I’m curious to see how that pans out. Because, as you guys probably very much know, a lot of lawyers think, oh, we can build that, we can hire someone to build that. And then they get bored, and then they come to us, and then Tony and our team have to clean up their mess. So I think, for better or worse, we might have that again, just with different AI problems.
Greg Lambert 48:45
And Tony, what’s your prediction?
Tony Thai 48:47
I mean, I hate to jump on that bandwagon, but I think that’s exactly it.
Ashley Carlisle 48:53
Also, you hate to say that I’m right.
Greg Lambert 48:56
I don’t know, he’s said “You’re right” a number of times.
Ashley Carlisle 49:00
I know. And now I have this on record.
Greg Lambert 49:03
We’ll just do a loop on it. Yeah, all the compliments, we’ll just put them on a loop and send it to you.
Ashley Carlisle 49:09
Greg and Marlene, do people ever ask you what you think? Obviously, I know you have a lot of these conversations, and in very different parts of the industry. But in regards to AI, what do you think?
Marlene Gebauer 49:19
We debate stuff all the time, just internally. And the attorneys come and they want to know, they want feedback from the innovation group: okay, what should we be doing? What do you recommend? So yeah, there’s a lot of that type of conversation going on, I think.
Greg Lambert 49:39
Yeah. And I would say one of the things I’m hoping will happen over the next six months to a year is that, even if this isn’t the answer to their question, because of the hype, they’re going to ask questions. They’re going to come to you and say, hey, I need ChatGPT, because that’s what everybody calls it, I need ChatGPT to do X, Y, and Z, because I’m having trouble and I need to make that more efficient. And then as we get into discussing it, we realize, well, really, you have a data issue, or you have a process issue, and it gives us a chance, if we’re not the right people, to at least direct them. One of the benefits I’ve loved so far is that it’s gotten the conversation going when it may not have gotten going before.
Marlene Gebauer 50:39
I’m sorry, I had another question. Oh, go ahead, Tony, you were gonna answer. Sorry.
Tony Thai 50:43
I want to hear that question, but I have a side prediction that will probably put egg on my face. I’ll say it anyways.
Greg Lambert 50:52
We will, we will bookmark this in the transcript.
Tony Thai 50:56
I predict that, with Gen AI and the kind of magnifying glass it puts on legal tech and just efficiency in general, some major firms are going to move towards a flat rate model in the next 12 to 18 months. I know we’ve been talking about that for over three decades now, but I actually think it’s going to start to have to be a thing, because the client awareness is just there. That’s the part of the conversation that the firms have to stay up to date on.
Marlene Gebauer 51:33
Yeah, that wasn’t my question, although we have talked about that as well internally, that this might be the thing that pushes us over the edge. My question was about prompt engineering. Do you see any opportunities for this type of role in firms or in other places? And what types of folks do you think would be moving into those roles?
Tony Thai 52:04
I think it’s a skill set that gets added to existing legal professionals. I can’t imagine needing a full-time person on it; I can’t see that pathway. But I am seeing a lot of it pop up, even with our clients; they have open job postings for prompt engineers to help them with it. The major question is, what’s the long-term use case for them once they’ve engineered these prompts?
Greg Lambert 52:36
It’s like being a webmaster in 1994.
Tony Thai 52:40
I love that reference. Heck, yeah.
Greg Lambert 52:43
Well, Tony Thai and Ashley Carlisle from HyperDraft, we want to thank you for coming back on. We want to do this again in January. All right.
Marlene Gebauer 52:55
And thanks to all of you, our listeners, for taking the time to listen to The Geek in Review podcast. If you enjoy the show, share it with a colleague. We’d love to hear from you, so reach out to us on social media. I can be found at @gebauerm on Twitter,
Greg Lambert 53:08
and I can still be reached on Twitter at @glambert, but I’m finding myself more and more on Threads, so you can join me there at @glambertpod, the same glambert but with pod on the end. Somebody beat me to glambert; it was Instagram, someone on Instagram beat me to it, and you have to take your handle from Instagram. So, Tony and Ashley, if anyone wants to learn more about HyperDraft or continue the conversation with you, where can you be reached?
Ashley Carlisle 53:42
So they can join our waitlist on our website,
Greg Lambert 53:46
This is a time I wish it was video; it would have been great. So, okay, let me rephrase that. Ashley, where can people find out more?
Ashley Carlisle 53:57
So if people want to learn more, or talk more about AI, document or workflow automation, or really anything legal tech (we get a lot of random questions and are happy to answer), they can join our waitlist on our website, www.hyperdraft.ai. They can follow us on LinkedIn, where we are HyperDraft, and on all other social media platforms. Or they can just email us: I’m firstname.lastname@example.org, and Tony similarly at email@example.com.
Marlene Gebauer 54:23
And listeners, you can also leave us a voicemail on The Geek in Review Hotline at 713-487-7821. And as always, the music you hear is from Jerry David DeCicca.
Greg Lambert 54:34
Thank you. Thanks, Jerry. And Jerry has a new album out.
Marlene Gebauer 54:37
Oh, terrific. Well, we’ll have to talk about that next.
Greg Lambert 54:39
All right. Talk to you guys later.