This week on The Geek in Review, we talk with Gregory Mostyn, CEO of Wexler.ai, about how his company is building a sharper form of legal AI for litigation. In a market crowded with broad platforms that aim to handle every legal task at once, Mostyn describes Wexler as a focused system built for one of the hardest problems in disputes: understanding the facts. He shares how the idea grew from watching his father, a judge, carry home stacks of ring binders and spend late nights reviewing case materials by hand. That early picture of legal work, heavy with paper and pressure, became the spark for a company aimed at helping lawyers work through massive records with more depth, speed, and precision.
A central idea in the conversation is Wexler’s view that the most useful unit of analysis in litigation is not the document, but the fact. Mostyn explains that lawyers are often handed a mountain of emails, messages, filings, and exhibits, yet what they need is a clear understanding of what happened, why it matters, and where the pressure points sit. Wexler is designed to pull out events, inconsistencies, and supporting details from that record so litigators are working from a factual map rather than a pile of files. That shift matters because disputes are rarely neat. Important evidence may be tucked inside an offhand message, a late footnote, or an exchange written in vague, coded language. Wexler’s aim is to turn that mess into something a trial team can use to shape strategy.
Mostyn also walks through the mechanics that separate Wexler from more general legal AI products. He describes a detailed fact extraction pipeline that processes unstructured material and turns it into structured data before the system reasons over it. That design helps Wexler deal with the disorder of litigation, where timelines blur, people contradict each other, and key details are easy to miss. He also points to the scale of the platform, noting that it handles large document sets and supports work such as deposition preparation, trial preparation, summary judgment briefing, and early case assessment. One of the more striking features is real-time fact checking during depositions, where the platform helps lawyers spot contradictions in testimony as the questioning unfolds. The effect is less like using a search box and more like working with a tireless junior team member who has read the whole file.
Trust, accuracy, and restraint are another major part of the discussion. Mostyn is careful not to oversell what AI can do. He openly states that no system is perfect, yet he argues that Wexler reduces risk by staying inside the record given to it. It does not search the internet, does not drift into outside material, and ties its outputs back to specific text in the source documents. That discipline is important in litigation, where a made-up citation or invented fact is more than embarrassing; it is dangerous. Mostyn presents Wexler as a tool that helps lawyers verify, question, and sharpen their understanding of the case. The result is less time spent slogging through repetitive review and more time spent thinking about how to use the facts in a meaningful way.
The conversation closes on a bigger question about where this kind of technology leads the profession. Mostyn believes that as AI takes on more of the burden of document review and fact development, the value of human lawyering rises in other areas. Strategy, advocacy, witness preparation, courtroom performance, and judgment all become more important when the groundwork is assembled faster and more thoroughly. He also suggests that clients are beginning to care less about how many hours were spent reviewing documents and more about whether their lawyers are prepared, informed, and effective. For listeners interested in litigation, legal AI, and the next stage of law firm economics, this episode offers a thoughtful look at a company betting that the future belongs to tools built for depth, discipline, and the hard realities of dispute work.
Listen on mobile platforms: Apple Podcasts | Spotify | YouTube | Substack
[Special Thanks to Legal Technology Hub for sponsoring this episode.]
Email: geekinreviewpodcast@gmail.com
Music: Jerry David DeCicca
Transcript:
Greg Lambert (00:00)
Hey everyone, I’m Greg Lambert with The Geek in Review, and I have Stephanie Wilkins from Legal Technology Hub with us. And Stephanie, it’s about that time again to release the Gen AI map. So what’s it look like now?
Stephanie Wilkins (00:13)
Yeah, it is. It’s come to be that every new quarter at Legal Tech Hub means it’s another update to the LTH Gen AI Legal Tech Map. And somehow we’re already there for 2026. As with nearly every map iteration that came before it, we’re once again seeing significant quarter over quarter growth in the map. As of March 2026, we’re right around 1,000 product placements on the map, which is a milestone that we’ve been looking forward to since we launched the first map.
We thought we might hit it by the end of 2025, but we weren’t far off the mark with that prediction. But speaking of that first map, if you remember, or if maybe you never saw it the first time around, the first iteration of the map came out in February 2025. We saw 400 product placements for Gen AI in legal tech. We updated it just two weeks later, just in time for Legal Week 2025, and we had about 500 product placements. So we’re talking…
One year ahead, Legal Week 2026, we’re at almost 1,000. So we’ve doubled the market in that one year, which is just really impressive. And the count went up significantly even from just three months ago. Our end of the year map for 2025 had about 850 product placements. So we’re still on a steady incline in terms of growth. But where we’re seeing growth has shifted a little bit over time. Initially, we saw a lot of growth in hot areas like AI legal assistants.
And while there still is some growth there, this time around we’re seeing our biggest areas of growth in more bread and butter, or less trendy, areas, if you will, like law firm operations and compliance. And to me that signals that AI really is starting to make inroads in legal, in areas where it can make the most difference internally, and maybe not externally in what you want people to think you’re working on. And that’s sort of in line with what I saw at Legal Week also, that the AI…
conversation is maturing a lot and people are having smarter discussions about how we can solve actual problems. And it’s all part of the bigger trend of moving from if to how on AI. And I think that’s a really great thing. I have heard some people start to speculate whether our map will start getting smaller eventually rather than bigger. Given all the consolidation in the market, that may one day be true, but by all accounts, by everything we’re seeing, we’re not at that point yet.
So we’re still getting bigger, and curious to see where we are at the end of Q2, three months from now. But if you want to see the latest map and our analysis of it, you can just find it on www.legaltechnologyhub.com. And that article will link to prior iterations of the map as well, so you can compare.
Greg Lambert (02:41)
All right, Stephanie, make a projection. Second quarter, 2027, what’s our number?
Stephanie Wilkins (02:48)
Second quarter of 2027? I’ll go 1,500.
Greg Lambert (02:50)
All right.
Marlene Gebauer (02:59)
Welcome to The Geek in Review, the podcast focused on innovative and creative ideas in the legal industry. I’m Marlene Gebauer.
Greg Lambert (03:06)
And I’m Greg Lambert. And Marlene, for the past three years or so, the legal tech world has been obsessed with the Swiss Army knife of AI, these tools that try to draft every email, summarize every contract, and basically try and boil the ocean in one go. We’ve entered the new phase in
Marlene Gebauer (03:26)
One size fits all.
Gregory Mostyn (03:27)
Yeah.
Greg Lambert (03:30)
what’s being called the specialized scalpel, where generalists are being pushed aside by these very high precision tools designed for the highest stakes in litigation. And today our guest is at the center of this shift. He’s the CEO of Wexler, a London based company that pioneered the category of fact intelligence
Greg Lambert (03:51)
and has rapidly become essential infrastructure for some of the world’s most elite law firms, including Clifford Chance, Goodwin, and HSF Kramer.
Marlene Gebauer (04:02)
And Gregory Mostyn comes from a family deeply rooted in the UK legal establishment. His father’s
a recently retired High Court judge, and his siblings are partners and barristers. Greg, you took a little bit of a roundabout way into legal tech, including a brief tenure as an actor on the HBO baking drama Industry. We’ve got to hear about this. Gregory, welcome to The Geek in Review.
Gregory Mostyn (04:11)
Okay
Good job.
Banking, banking.
Greg Lambert (04:27)
Thanking.
Gregory Mostyn (04:29)
Yeah,
thanks for having me. Yeah, it’s good to be here.
Greg Lambert (04:33)
So, Greg, we love a good founder’s journey, and you’ve mentioned in some of your writings that there’s a childhood memory of how your father was bringing home these huge physical ring binders of data to review every single night. So how did that image of the binder era of litigation shape your decision to avoid the generalist AI boom and instead focus on
Gregory Mostyn (04:56)
Yeah.
Greg Lambert (05:01)
this scalpel approach that you’ve been talking about?
Gregory Mostyn (05:05)
Yeah, so yeah, as you correctly identified, I’m not an attorney, but I grew up surrounded by them. I remember basically every Sunday lunch, every sort of family dinner, huge debates about whatever was the latest case raging, not just in the UK, but sort of globally, you know, whatever it was, the kind of controversy of the time. And I do vividly remember my dad coming back having to review literally 10 ring binders before the next day
in court, like overnight, you know, staying up till 3am or something and then standing up in court the next day. And my dad was actually a real innovator in the profession. So in COVID, when he was a judge at that point, he was very forward about, you know, conducting hearings on Teams and Zoom and all those kind of things. And he actually founded a kind of legal tech company in the 90s, as it happens, which is still going. It’s called Class Legal, a family law company, which started off as a sort of
publisher for some sort of legal textbooks within family law in the UK. And it also helps you to create various templates and forms and things. So kind of like a bit of rules based machine learning, I don’t know. But anyway. So yeah, I think that was a very vivid memory of my childhood. And when I started this, I went through an incubator called Entrepreneur First, where you’re sort of paired up with other brilliant people, supposedly brilliant. And you try and
come up with a real big pain point, and there was no pain point I could remember more vividly than my dad having to review hundreds, thousands of pages manually, line by line, word by word, to prepare for court the next day. So that was what we went after. We spoke to literally hundreds of people, starting off in my network, people I knew, you know, family, friends, et cetera, and growing and going out into the world across the US as well as in the UK, and identified that this was a real problem: basically establishing the facts in disputes, you know.
You either hammer the law or you hammer the facts. And actually, as a sitting judge, my dad realized that half the lawyers in front of him didn’t know the facts of their cases. They probably physically couldn’t in a lot of cases because there were too many documents to review manually. And you’re always going for a sort of subset of the data involved. And yeah, I think that was the Prince Harry case versus a big newspaper here where they just had to take a small section of the potential evidence to review because there wasn’t enough time or resource to review the whole case. And I thought, hang on a minute, we could build something or actually
we can review everything and give it the attention it deserves so that every client gets the best representation. And hey, maybe those lawyers don’t have to stay up till 3 a.m. to review it cover to cover. So yeah, that’s been the story since then. We’ve been ruthlessly focused, trying to do something more sort of specialized rather than, as you say, try to boil the ocean. But it’s now become this kind of full platform for analyzing, establishing and verifying facts in disputes.
Maybe it’s more like a kind of surgeon’s tray of scalpels rather than just the one.
Greg Lambert (07:51)
I’m curious,
did you experiment on your siblings and your father to test these things out as you were developing them?
Gregory Mostyn (07:58)
100%. At the beginning my dad was very closely involved. My stepmum’s also a barrister in the UK. My brother’s a partner at Cleary. So yeah, they were my first proto beta testers, if you like, showing them Figma prototypes, understanding how things would work. You know, if we think about how far the models have improved, at the time they were still very… I mean, at the time they were incredible. But now looking back, you really had to do a ton, a ton of prompting, of stitching together the different systems to get the best output.
And actually we’re now the beneficiaries of the billions that are being poured into the models because, you know, it’s a bit of a modular system that we’ve created. So one new model release improves one part of our product and so on and so on. But yeah, at the time it was kind of mechanical Turk, if you like, but very, very helpful to have that, you know, and so on. Yeah, exactly. But yeah. Cool.
Greg Lambert (08:40)
Thank you.
Gosh, I haven’t heard mechanical Turk in a while.
Marlene Gebauer (08:50)
So
for our innovation and KM leaders that are listening, we often hear them say, we already have eDiscovery tools for document review. Now, you’re coming at this from a different angle. You’ve argued that eDiscovery prepares documents for humans, but fact intelligence is different. It reads them like a human does. So it’ll extract events from footnotes on page 993 or buried in a WhatsApp thread.
Gregory Mostyn (08:54)
Yeah.
Mm-hmm.
Marlene Gebauer (09:19)
Can you explain why the atomic unit of legal knowledge needs to shift from the document to the individual fact, and how that changes a firm’s approach?
Gregory Mostyn (09:30)
Yeah, for sure. I think, you know, if you imagine like a hypothetical case, let’s say it’s a multi-party fraud claim, 80,000 relevant documents, you know, multiple plaintiffs, different advisors saying different things, you know, there are inconsistencies, one person’s claiming something, someone else is claiming something else, a third party is saying that never happened, someone else has got, you know,
another dog in the fight, where they’re trying to bring up some other discovery, which is important. It’s not that helpful to say, here’s some relevant documents, right? It’s not that helpful to say, here are the documents that are responsive to certain search terms, because people don’t speak in search terms. People speak in encoded, obfuscated language. Things don’t always add up. And so what you need is something that’s able to basically distill the documents into the facts. Not just saying, hey, here’s 10 relevant documents, but saying, this is what happened.
this is why it happened and this is why it’s important to the case. And so the process of basically extracting the discrete observable happenings, if you like, i.e. the events from the documents, assigning relevance based on a list of issues that the attorney themselves provide, which may be taken from the complaint or, you know, from the pleadings, etc. And then using that as the database, which you reason over means that you’re, you know, you’re basically equipped with all of the information you need to build the most compelling case and
You’re not just saying, here are some relevant documents. You’re saying, this is what happened and this is why it matters. So it’s kind of, it’s complementary to eDiscovery, right? We are the sort of second level, the deep strategic layer that you take once you’ve got the relevant documents, or maybe before eDiscovery, right? Client self-produces 10,000 documents and says, get back to me by Friday with where we stand. But the output isn’t just, here’s a kind of data document set which we’ve built up. It’s actually the story, because litigators tell stories.
they tell stories backed up by the facts.
Greg Lambert (11:21)
Yeah, that’s…
Marlene Gebauer (11:21)
I
think it’s important you mentioned it’s kind of a second step, because, you know, sometimes there are, like, millions of documents that you’re talking about, and I don’t know how many, you know, you can accept at one time, but, you know, that step is still critical to kind of just narrow it down to what you want to look at and query.
Gregory Mostyn (11:29)
Yeah.
Yeah.
100%. So yeah, we can do a lot. We can do 250,000 documents, which is, I think, higher than most of the LLM legal platforms I’ve come across. And that’s kind of like AI native processing. And we hope to get to a million by the end of the year. But you’re right. It’s like from the smaller universe. One of my colleagues says, you wouldn’t want a surgeon to be giving you aspirin, right? There’s no point putting a whole custodian’s mailbox in Wexler, because you want to know that what you’re putting in is at least broadly relevant to the case, because
Marlene Gebauer (11:52)
It’s not bad, not bad.
Gregory Mostyn (12:11)
of, you know, it’s really extensive and painstaking what we’re going through and extracting. So yeah, there is that first step, which is critical for the first cull, but actually that sort of smaller universe of documents, turning that into winning case strategy, is critical. And so where we actually get used a lot is for depo prep, trial prep, summary judgment briefing, plus early case assessment. And obviously, it’s jurisdiction agnostic as well. So we do arbitration, investigations and so on.
Marlene Gebauer (12:36)
So how does it compare to, say, like one of the general ones? Because, I mean, they certainly can be used in that way and do comparisons. So how does it differ? What’s the secret sauce?
Gregory Mostyn (12:43)
Yeah.
So I think it’s about that fact extraction pipeline which I talked about. Essentially, the key kind of difference is that what we do is we process the documents, we pass them through this 15 step pipeline, which doesn’t just stuff documents into one context window and say, hey, go look for X or go look for Y. What it does is it sort of normalizes that data. So even if something is like a one message WhatsApp saying, I flew to Paris last year,
versus something really neatly laid out saying the plaintiff flew to Paris in this year, which, obviously you guys are both technically in the weeds here, you’ll know LLMs have a bias to more structured information. And so what happens is if you just chuck documents into a generalist tool, yes, it will give you broad insights, but actually in order to really find the key contemporaneous piece of evidence, the kind of things that trial attorneys are looking for, that smoking gun to help them build the winning case, it’s not always clear.
And so it’s that fact extraction pipeline where we stitch together all these different models into this 15 step process, which rationalizes and elucidates that complex information, normalizes it into structured data, which they can then work over. And we hosted a dinner at Legal Week, and one of the innovators there, who worked in litigation, was saying, litigation is messy. Like it is very, very messy. Things are not clear. Things are obfuscated either deliberately or accidentally.
You’ve got things being overwritten by other things all the time, and you need something that can basically first distill that into a structured way and then use that as the unit of analysis. So that’s one of the key differentiators, obviously scale, which we’ve already touched on. And then we’ve got some other functionality, which is only possible because of this fact bank that we’re building up. So one of them is real time, which is basically like live fact checking for depositions. And it’s only possible because of that database we built up.
We could put it on right now and it could fact check all the things that I’m saying, and probably it wouldn’t show anything, I’m happy to say. So yeah.
Greg Lambert (14:43)
It would let
us know that you flew to New York recently. Well, speaking of real time, and I also want to talk about the digital trainee, Kim, that you set up. So real time allows, like you said, the litigators to flag contradictions in testimony during a live deposition as it happens. And you have famously said that, you know,
Gregory Mostyn (14:47)
Exactly.
Yeah.
Yeah.
Greg Lambert (15:10)
Truth is a nebulous concept, but contradiction is very verifiable. So how are tools like Wexler and your AI agent acting more like a digital trainee or a partner rather than just a passive search tool? I’ll let you answer that, and then I want to follow up with a couple more.
Gregory Mostyn (15:13)
Yeah. Yeah. Yeah.
Mm.
I mean, it’s interesting, I think…
Basically, what Kim does really well is sift through vast amounts of information and pick out patterns. We don’t opine on the veracity of things, right? We only look at the documents that we’ve been given. So within the universe of documents, are there contradictions against other documents, other facts within the same data set? That’s what we can do. This is how we minimize, basically eliminate, hallucination risk, inaccuracies, those kind of things. It doesn’t search the internet, doesn’t look in its training data,
is absolutely ruthlessly and rigidly told to focus on that information. And also the way we structure all the data into those facts really drives up that accuracy. So as much as it’s a very helpful digital trainee, as you say, it’s limited to the four corners of the page of the documents, plus the metadata that you’re giving it. So that’s really important. That’s how we drive up the accuracy. And yeah, maybe we’re losing out on some potential functionality, but actually it’s much more important for us to be accurate
and allow the lawyers to then apply the law to the facts and look on the internet and those kind of things. So yes, truth is a nebulous concept. We can’t say for sure if something is true, but we can say if something looks dodgy or if something looks odd or there are fact patterns that don’t quite add up. But if it’s not contradicted by another piece of data within the same data set, that’s not our job. That’s not the AI’s job to do that. And that’s kind of our core belief, I think. That’s our thesis. It should be like an accelerant and enhancement. It’s not going to take you three weeks to do this task, or three days. It might take you three hours.
But importantly, those three hours will help you get better at understanding the facts of the case so that you can stand up in court or prepare for a deposition and you actually know what’s going on, not just, the AI told me this. So I think, you know, yes, it’s a really helpful digital trainee, but it is only looking at the documents you give it. It’s not looking elsewhere. And it can look for patterns, but it’s not going to sort of lead you to the wrong place. It’s not going to look at the internet and it’s not going to hallucinate case law, because that’s not in the remit of what it does.
Greg Lambert (17:28)
Do you rely on the LLM to work with these documents? And the reason I ask is, I’ve seen it, and I’ve had people come up to me, where they may dump a few hundred or a few thousand documents in and then all of a sudden it’ll like introduce characters that don’t exist.
Gregory Mostyn (17:48)
Yeah.
Gregory Mostyn (17:50)
Yeah.
Greg Lambert (17:50)
There’s always like a Robert Chin that shows up somewhere. And so how is it that you’re taking the information and making sure that you’re reducing the chance of hallucinations?
Gregory Mostyn (17:54)
Yeah, we have it.
So there’s a few different things. I think technically the fact that we’re operating from structured data rather than unstructured data massively drives up the accuracy of the system. And this has been tested against other generalist platforms by our customers regularly. So basically because it doesn’t run out of context, it doesn’t have to invent or fabulate people when it’s looking for the kind of smoking gun. Obviously everything, this is kind of table stakes now, but I think you’d be surprised,
everything is sourced down not just to the document but to the sentence that it’s been taken from. And then also we have like extreme guidelines and guidance in the back end to say, you know, you will return a null answer if there’s nothing within the four corners of the document that supports this assertion. Can we say it’s 100% accurate? Of course not. And any AI vendor telling you it is, you know, you need to go and sell them back the bridge they’ve sold you, right? So
but I think it is highly, highly accurate and more than anything, the verification flow where you can really quickly click on each source and independently verify it is helpful, one, to verify, but two, you’re actually looking at all the documents and you’re getting familiar with the documents as they should be viewed as if they are printed out and you’re finding those patterns too. So yeah, there’s technical ways, which is basically the structured data that fact extraction pipeline I talked about earlier.
And then there’s obviously guidance, and there’s also, you know, the importance of change management and training, which you two will obviously be working with your attorneys on, to ensure that everyone understands that this is a new type of technology. You know, it’s not going to return the same result every time. You can limit the variance, but actually it’s like having a sort of second opinion. If you gave the same task to a hundred attorneys, they’d probably return you a hundred different results. So yeah.
Marlene Gebauer (19:44)
So in your recent writing, you described 2026 as the year of market chaos, yikes, where traditional law firm knowledge moats have evaporated. Now, if AI can establish a factual record with 95% accuracy and save 90% of a junior associate’s manual review time, what does that do to the traditional economic model of big law?
Gregory Mostyn (19:49)
haha
Mmm.
Look, I think it’s a big question and I’m sure I’m not qualified to answer it in full, but I think my thoughts are, one, there’s a huge amount of wasted time which doesn’t bill to the client. It’s not valuable time. It’s dead time, basically, which we can massively reduce, meaning people can go home earlier. They can spend more time focusing on strategy and how to actually use this information. So, you know, one of our customers, a partner actually, this user said
Wexler identified an inconsistency. He reckons he would have found it maybe in a couple days, but actually he could spend a couple days planning strategy rather than just churning through documents, right? And actually how to use this in an offensive or defensive way. So he wasn’t losing any billable time, but the time that he was billing was much more valuable for the outcome of the case, and in the kind of theater of litigation, that’s what you’re looking for, the upper hand. Increasingly, we are seeing fixed fees with some of our clients, people looking for more kind of value based billing, or maybe like a hybrid where it’s like
project-based fixed fee, you get a menu of sort of outputs. And actually I was thinking about this, like, you know, why do you bring in a big law firm if you’re in a bet the company dispute? It’s because you want the reassurance that they’re going to represent you, because it’s the most important thing in your company’s history, or it’s like one of the most important things, depends how big the company is and how litigious they are. But you want to know that when you bring in like the Quinn Emanuels or whoever it is, one of the massive firms,
they are going to go out and fight for you tooth and nail. And I think you can’t really put a price on that. The value system should be reflective more of that rather than just the number of hours of documents that you’re reviewing. Because if Wexler was in a big dispute, I’d definitely want them to be using AI, but I still want there to be humans there representing my best interest, because that’s what you pay for. That’s the kind of assurance. So yeah, I mean, we’re going to see a lot of creative thinking about the billable hour and how this kind of changes in the years to come.
But I think for the short term, there’s a huge amount of wasted dead time which can be reduced, and better quality of life for the litigators and better outcomes for the clients. And longer term, I think we need to start thinking more about kind of value based billing. I think where we work specifically, obviously a lot of the stuff we do is like oral advocacy and standing up in court. And I think people are going to want to prepare for that in the same way that kind of athletes train up for the big race. And I think that’s going to be a really important part of the work
that AI can help with too.
Marlene Gebauer (22:31)
I like your one example, that, you know, the partner was able to devote more time to strategic thinking. I mean, are you hearing that more broadly? Or was that just kind of an isolated example?
Gregory Mostyn (22:35)
Yeah.
No, absolutely. Like all the time. Like we’ve heard people, we had an example where, you know, they found an inconsistency which their forensic accountant had missed. And then they were able to use that inconsistency throughout the rest of the matter, you know, as a kind of key piece of strategy throughout. I think we’re definitely seeing it’s reducing the kind of grunt work and freeing up time to think bigger picture, you know, think about where this kind of shakes out, what are our most…
what are our best strategies? You can even do adversarial analysis where you talk to the chat assistant and say, okay, you’re the other side now, let’s run through some hypotheses, and they can obviously review every single document and you need to know what to say in response. So yeah, we’re definitely hearing that people are massively reducing the kind of busy work and it’s freeing up time to think more strategically.
Marlene Gebauer (23:32)
It’s good to hear. I mean, are you hearing anything from the client side? Not your client side, but our client side.
Gregory Mostyn (23:38)
The end clients? Well, indirectly, but I know that like, you know, people are, we have clients who also get access to the platform. So without getting too into the weeds, but we’re not seat based, we’re consumption based. So we don’t care how many users there are. So we have, we give access to the clients as well. In the UK, we also give access to the barristers who are independent, the kind of trial attorneys. So yeah, we definitely hear people really like it. Obviously they like better value for money, but also I think they like that we’re, you know, we’re covering all bases here. We’re reviewing every document and you know, my
Marlene Gebauer (23:39)
Yes.
Gregory Mostyn (24:08)
my colleague was a litigator for several years at Mayer Brown, and he was saying, you know, I was reviewing documents at 3am in my bed, you know, just going one by one, yes, no, yes, no. And like, that’s going to be fraught with human error, right? And, you know, I’d probably rather pay for AI to do it anyway, even if it was the exact same number of hours, because, you know, you know that they can cover all the bases. So exactly, it’s not going to be tired. It’s not the end of a long week. It doesn’t have,
Marlene Gebauer (24:29)
They’re not sleepy, it’s not sleepy.
Gregory Mostyn (24:35)
you know, all the personal things going on that every human does. So, you know, maybe it does, who knows.
Greg Lambert (24:40)
So, Greg, I'm gonna hit you with a question that's off script here, but it's just something that, as we've gone around and started talking with people, and you probably heard this at Legal Week as well: training. How are you approaching the training? Are you…
Gregory Mostyn (24:43)
Yeah.
Yeah.
Greg Lambert (24:59)
able to actually leverage the AI to help you kind of learn the AI or what’s the training method that you take for getting people up to speed?
Gregory Mostyn (25:10)
Wexler, yeah, how we train people to understand it. So we do a few different things. We obviously do hands-on training; we usually do it by sort of sub-practice group. We also offer top-up training for a specific case, when people want that. We've even signed NDAs and gotten really into the weeds of the case with the customers, although that's obviously not our default position because of, you know, confidentiality. So we do a lot of hands-on training. We go in person, you know, London and New York. We do walk-through training and so on and so forth.
What we also do, which I think is really important, is a kind of workshop that's not about Wexler, but about AI in general. Like I was saying to the team, you know, we need to convey to even the most sophisticated attorney how this technology can both solve a really complex reasoning problem and also not count the number of R's in strawberry. People don't understand how those two things can be true at the same time. And so
it's really important that before we even give anyone access to Wexler, or maybe it's after, but whatever, while they're using it, they understand that this is a new type of technology. It's a kind of pattern matcher. That doesn't mean it's not useful. It's very, very useful, but you need to know those kinds of general things to be aware of and what to look out for. So yeah, we do a lot of training. We do a lot of workshops. We do kind of general AI familiarity, you know, educational workshops as well.
And we do top-ups for individual matters. What often happens is people will be using it regularly and they'll say, we've got a massive case that's just come in, we want to do a special training for the attorneys on that. And then we'll do that too. So yeah.
Greg Lambert (26:45)
Okay, well, kind of dovetailing with this, before we get to the crystal ball question, we've been asking our guests to share with us some of the resources, newsletters, or thinkers that you're relying on to keep ahead of the compute cycles of these big AI foundational labs. So what helps you keep up with things?
Gregory Mostyn (26:57)
Yeah.
Yeah.
So there's a brilliant technologist called Benedict Evans. I don't know if you know him, but he's a great follow. He's good because he has a healthy dose of skepticism. And I think he quite rightly says that, yes, it's incredible technology, but we're still early and we don't really know how this is going to shake out. And so it's quite a healthy antidote to the kind of…
you know, AGI is coming, all those kinds of things. So I like to keep my feet on the ground with him. One of the things I always go back to is a tweet of his, which is basically, you know, anyone who says they've solved the accuracy problem in AI is lying to you, right? Because it is not going to be 100% accurate. That's just generative AI. But that doesn't mean it's not useful. It's still unbelievably useful. Like, I use
Claude all day, pretty much, and it'll still throw out a random error every now and then. That doesn't mean it's not useful, it just means you need to know what to look for. So yeah, that, plus I have a bunch of other newsletters that land in my inbox each day, obviously Ethan Mollick, who you'll know, and various others on the legal side. But I try not to read too much about what's going on. I kind of just, I always try and stay focused on what we're doing and
and just building something that's great for our customers. And I'm always amazed, actually, by how low the penetration is among lawyers. You speak to people, like practicing attorneys, and they might have been using Copilot, or maybe the firm's got a Harvey license and they're having a play around, but they haven't really used it. And so, yes, there's so much noise and so much hype and so many VC dollars going into this market, but actually I think we're still very early.
Marlene Gebauer (28:57)
So Gregory, now it is time for the crystal ball question. Looking ahead the next three to five years, you know, what's the single biggest change that you see coming for the role of the oral advocate or the trial lawyer?
Gregory Mostyn (29:10)
Yeah, so I kind of already touched on this, but I think the human side of AI is going to be even more important, because the document review side is going to be less important. So the pressure on the human side, the mediations, arbitrations, trials, depositions, is going to be even higher. So AI that can help you prepare for those is going to be really, really powerful. I think, you know, if you can review every single document in the world, let's say,
then there's not going to be any more smoking gun that hasn't already been found. It's going to be about how well you deliver that argument. It's going to be about how well you bring the story together. It's going to be about the way that you interface with your client, and the way that you bring the context that's maybe not written down in any document into the story, and how you bring it all together. And then you win the hearts and minds of the jury or the judge or whoever it is based on the evidence. So I think it's going to be, you know,
the kind of counterpoint: oral advocacy is actually going to be even more important, because the document review and the fact-finding are going to be, you know, largely automated within a few years. So I think that's my crystal ball. It's going to be even more important to have great oral advocacy, and lawyers are going to want to train up for those kinds of things, just as athletes prepare for the big race.
Marlene Gebauer (30:30)
It's going to be more about the lawyering.
Gregory Mostyn (30:32)
Exactly.
Greg Lambert (30:33)
All right, well, Greg Mostyn from Wexler, thank you very much for the conversation. I've enjoyed this, and thanks for going off script with us a little bit.
Gregory Mostyn (30:43)
No problem, love that.
Marlene Gebauer (30:45)
And thanks to all of you listeners for listening to The Geek in Review. If you enjoy the show, share it with a colleague. We'd love to hear from you, so reach out to us on LinkedIn and our Substack page.
Gregory Mostyn (30:56)
Awesome.
Greg Lambert (30:56)
And
Greg, where can the audience go to learn more about Wexler or about you?
Gregory Mostyn (31:04)
So wexler.ai, you can see it there in our office. Yeah, wexler.ai, very easy to remember. Maybe I did, maybe I did. Exactly, so yeah, head there, you can book a demo, you can book a chat with me. Otherwise, if you're on LinkedIn, I'm always happy to chat about anything really, so yeah.
Marlene Gebauer (31:09)
Did you do that on purpose? Just kidding. It’s like, yes, I did.
Greg Lambert (31:11)
He’s got a good marketing person.
Marlene Gebauer (31:24)
Terrific. Well, thank you again. And as always, the music you hear is from Jerry David DeCicca. Thank you, Jerry. And bye, everybody.
