This week on The Geek in Review, we sit down with Mark Doble, Co-Founder and CEO of Alexi, to discuss the state of AI-driven tools in the legal industry and how they are evolving to meet the needs of modern litigation practices. The conversation begins with a timely debate on measuring productivity in remote work settings. Doble, coming from a background in both law and software engineering, draws intriguing parallels between how legal services and software development measure output and efficiency.

Moving on to Alexi’s core offering, Doble delves into how the platform is currently being used by litigators. He explains that Alexi’s AI technology not only handles the tedious work of research and memo drafting but also provides opportunities for lawyers to explore creative, strategic approaches to cases. By automating routine tasks, Alexi empowers attorneys to focus on high-level legal reasoning and client goals, rather than sifting through mountains of documents.

A key aspect of the discussion centers on the ways in which AI tools, like Alexi, can transform junior associate work. Instead of solely performing rote research or document review, younger lawyers can now leverage these tools as teaching aids, accelerating their path to deeper legal understanding. Doble emphasizes that as automation becomes more sophisticated, the human lawyer’s role in guiding strategy and exercising judgment grows ever more critical.

Doble then addresses concerns around data security and confidentiality. He reiterates that while the underlying technology is evolving, core principles of security remain the same—encrypting data, controlling access, and ensuring that information is never inadvertently trained into the model’s outputs. He acknowledges emerging questions around work product and privilege but sees them as part of the natural adaptation cycle in adopting new technology.

Finally, looking ahead, Doble hints at a significant upcoming announcement from Alexi early next year. He suggests that this new release will push beyond current capabilities, bridging the gap between mere information retrieval and genuine “legal reasoning” support. While keeping details under wraps, Doble leaves listeners with a vision of AI as a true partner in litigation, promising exponential improvements that will redefine how attorneys practice law.

Links:

Listen on mobile platforms: Apple Podcasts | Spotify | YouTube

Twitter: @gebauerm, or @glambert
Email: geekinreviewpodcast@gmail.com
Music: Jerry David DeCicca

TRANSCRIPT

Marlene Gebauer (00:07)
Welcome to The Geek in Review, the podcast focused on innovative and creative ideas in the legal industry. I’m Marlene Gebauer.

Greg Lambert (00:14)
And I’m Greg Lambert. And on this week’s show, we are joined by Mark Doble, co-founder and CEO of Alexi. Mark, we want to welcome you to The Geek in Review.

Mark Doble (00:25)
Thanks for having me. Looking forward to it.

Marlene Gebauer (00:26)
Welcome.

Greg Lambert (00:27)
All right, so the last few weeks we’ve been kind of shaking things up here a little bit. We wanted to start off the show by having kind of a current topic that we talk about. And Mark, I thought this one would be really good for you because with Alexi, you do a lot of legal workflows with the tool. And there was a story that broke in the Washington Post this week.

which talked about a Stanford business grad. So this is a Stanford report, but not the Stanford report that many of us in legal research know. This is a different one. But Blanch reported that about 9.5% of the remote software engineers in his study were basically

producing no work whatsoever. And this caused quite a discussion about how to make sure that remote workers are being productive, and I think there’s another conversation about how you supervise people that you don’t see in the office. I will say that probably about

10% of workers probably do little to no work anyway, and that’s just a guess on my part, whether they’re home or not. But I think this was pretty stunning. And so we wanted to talk about it. And so Mark, since Alexi is designed to enhance legal workflows, we thought we would kind of take this topic and talk about, do you think there are

Marlene Gebauer (01:55)
Whether they’re home or not.

Greg Lambert (02:17)
tools like workflow tools that can help with this sort of thing.

Mark Doble (02:24)
Yeah, I think it’s especially interesting as it relates to the legal field because the legal field has, as we all know, been kind of trapped in this billable hour model. It’s the number of hours that you work, that’s how you measure the productivity of a worker. And I think it’s, well, it’s probably the best tool we have in the legal field, but certainly in software engineering, it’s definitely not. You can have a very brilliant idea, maybe

once a year, and maybe it only took an hour or two hours, it was a moment of insight. And yeah, you’re willing to pay that entire year’s salary for that person’s contribution. And so it’s definitely, at least in our company with our engineers, how we think about measuring productivity. It’s certainly not time-based, though there’s an element of that. We want people not to be incentivized just to work more; it’s very easy to game that sort of system,

and instead really thinking about what do we want as the output? And I think the legal field really has, especially with the advent of AI tools, it’s incumbent on the legal field to start to think more critically about how are we measuring the right output of a lawyer? And as AI tools can do more and more, that question is gonna be ever more critical, right? To answer how do we measure

the output we want for this lawyer, for this law firm, for our clients. So the answer, I think, is a bit more difficult than just identifying the question. There are other people way smarter than me on this topic that are looking into it, but specifically with our technology and our product, I think what we do see is that our customers are, it’s not just,

more efficient. It definitely is elevating the quality and the kind of work that they can do at a very material level. It really changes the kinds of work, allowing people to brainstorm, and maybe it’s not the technology or AI that’s coming up with new ideas, but it’s promoting the lawyer to be more creative, to do more things also. I think this is

a really good thing. How law firms evaluate, I mean, we’ve got some opinions, but I’m sure that lots of people are figuring that out.

Marlene Gebauer (04:47)
Yeah, Greg, I think the reference to engagement is critical here. I’ve worked in remote companies, remote teams, for years now, whether that was remote as in people were in different offices, or remote as in people were working from home. You know, it varied, and you do run into these problems. Like how do you manage to make sure that people are engaged and people are doing their work? Now I’ve always worked, and I imagine software engineers probably do this

too, they work on a project basis. So they’re working on projects. Maybe they’re better at sort of masking whether they’re doing work or not. I don’t know. But I’ve always found that if you have people on a project, you can figure out when things aren’t getting done and why they’re not getting done. And I think it’s imperative that you have these conversations on a regular basis, not just sort of letting this, you know, this type of technical

solution do the work. I mean, people need to be engaged, you need to be checking in, whether that’s, you know, daily stand-ups for five minutes or whether that’s weekly meetings, whatever works for your team. You have those types of conversations and then you start figuring out where the chinks are. The other thing is, I think with some of these tools you have a checklist, like here are all the things that need to be done, and you can see where the blocks are. So if stuff isn’t getting

done, you see, okay, this isn’t happening and it’s sitting with X. And then it’s like, okay, what’s going on? X, what’s happening here? And figuring that out and moving it along.

Greg Lambert (06:23)
Yeah, I’m just thinking back. I remember a friend of mine who was CIO at a very large law firm years ago. We would all go out for happy hour and he would go, no, I’ve got to be here at 6:30 because the COO is going to call just to make sure my butt’s in the seat. I kind of want to make sure that we don’t overreact to this and go back to that, because I know

there’s an ongoing pressure for people to probably either get back into the office on a much more regular basis, or to have much more accountability. And again, I think Mark, you brought up a point I hadn’t really even thought about. We’ve talked on recent shows about value-based billing, that it’s the value that you’re getting, not necessarily the time that’s spent. And so, you know, I think as we look to

bring people in, we do it for good reasons, and that we don’t just do blanket assessments of both what they’re doing in the office and bringing people back into the office when that may not be the best solution for them. So it’s gonna be interesting.

Mark Doble (07:36)
Well, there’s also a way probably to take the value-based billing even a step further, right? In our company, we organize around pods, product-oriented development, and these are anchored on product KPIs, like how much users are using a feature. And if you contribute in material ways to a particular feature that is driving more value for the user as measured by engagement,

then we measure that impact on the company regardless of how many hours you worked or how long it took you to do that. And I’m sure at law firms there’s similar ways to measure: are you contributing to the value of the enterprise, of the firm, by delivering quality service to the clients? And that’s really what should matter.

Marlene Gebauer (08:21)
Exactly. I mean, I think this home versus not home, it’s kind of a false argument. It’s really just about productivity and value, regardless of whether you’re sitting in an office or whether you’re sitting at home or working remotely. I mean, we work for global organizations, and at this point you would think we could appreciate the need for that flexibility. You would think.

Greg Lambert (08:47)
You would think.

Mark Doble (08:50)
I’m very pro in office though, don’t get me wrong. I do think the best ideas come from being in person often, but there’s a balance. That’s not true for everybody. Some people need space to think and contribute, but me personally and other people that I work with too, like we’re in the office every day and it’s an important part of how we work.

Greg Lambert (09:14)
Well, Mark, thanks for taking this little side journey with us before we jump in and talk about Alexi. So Marlene, you want to start it off?

Mark Doble (09:19)
That’s true.

Marlene Gebauer (09:21)
Yeah, I will. So let’s talk about Alexi. It is a litigation platform that uses AI to help streamline workflows and accelerate research. So could you start, Mark, by telling us more about what Alexi offers and how your customers are using it in their litigation practices?

Mark Doble (09:45)
Yeah, absolutely. Our core focus really is in litigation, primarily anything involved in disputes. So it’s understanding the evidence, understanding the law, understanding what relevant issues might exist, and helping our users to strategize, to brainstorm around achieving their clients’ goals. There’s a lot,

obviously, that falls under that umbrella. And we believe that AI can do almost anything that we can imagine it to do now. That’s our starting point. And obviously the truth is certainly somewhere below that in what AI can do. But when we start with that as kind of our view, AI can do almost anything under this umbrella, it forces us to get really creative and to build things in a way

that is matching the long-term trends of how foundational models are improving, but also delivering the most value that we can for our users. So primarily right now, the product is around research. Our very first product was generating research memos. So we were building RAG before RAG was even a thing to generate research memos. This was when it was just really text summarization. There was passage retrieval,

text summarization and combining these into generating research memos. And we started with human lawyers in the loop. We wouldn’t return a memo directly back to our customer right away because we’d have lawyers on our team who were properly identifying this is the right case, this isn’t the right case, or this is the right passage. Ultimately, this is the right answer or no. And we ended up generating a lot of very valuable data that allowed us to eventually fully automate this.

in combination with improved foundational models. And so that was really what got the company started. And then more recently, our focus has been on helping users really understand large volumes of evidence that go along with litigation, and ultimately the power is in combining these two. We’ve got, yeah, a big announcement early next year on some new capabilities that we think are truly state of the art

in being able to have AI really help understand your litigation files.
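The early pipeline Doble describes, passage retrieval plus text summarization with human lawyers labeling the outputs, can be sketched roughly like this. Every function and name below is a hypothetical illustration, not Alexi’s actual code; the point is just how reviewer judgments become training data for later automation.

```python
# Sketch of a human-in-the-loop memo pipeline: retrieve passages,
# draft a memo from them, and have a reviewer label each passage,
# banking those labels as training data for eventual full automation.

def retrieve_passages(question):
    # Stand-in for passage retrieval over a case-law index.
    return ["passage about duty of care", "passage about damages"]

def draft_memo(question, passages):
    # Stand-in for text summarization over the retrieved passages.
    return f"MEMO on {question!r}: " + "; ".join(passages)

training_data = []

def review(question, passages, lawyer_judgment):
    """A lawyer marks each passage right/wrong; the labels are kept."""
    for passage in passages:
        training_data.append((question, passage, lawyer_judgment(passage)))

question = "Is there a duty of care?"
passages = retrieve_passages(question)
memo = draft_memo(question, passages)
# Here a lambda stands in for the human reviewer's judgment.
review(question, passages, lambda p: "duty" in p)
```

The banked `(question, passage, label)` tuples are what would later let the validation step itself be automated.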

Greg Lambert (12:08)
I’m just curious, if I’m a litigator, what’s kind of the typical problem that I’m running into that I want Alexi to solve for me?

Mark Doble (12:21)
Yeah, so there’s a bunch of things. So one is understanding what are the legal issues, and how can I figure out what the right legal issues are for me to focus on for my strategy or theory of the case. And so there’s some brainstorming components. This is this iterative process of, here’s this legal issue. Is the law…

kind of on our side or not? Should we take this approach or not? Okay, maybe we should actually go down this other issue and then understand again what the law says here. What does the evidence look like? How does that match up? And so it really is organized as this kind of messaging chat interface where you can generate research memos. You can get answers to questions, quick answers to routine questions, and again, generate research memos,

and other drafting that we support. And then you can upload all the files, all the documents of the file, to get summaries, to do Q&A, to get deeper insights into all of the documentary evidence that might be relevant.

Marlene Gebauer (13:25)
I was wondering, Mark, what do you think in terms of these large solutions, more general solutions where you can ask questions and do these things, versus something more like Alexi that’s more targeted for litigation purposes? What do you think about the differences between those two types of solutions, and where do you see the industry going with that?

Mark Doble (13:50)
Yeah, well, I do think, we still think that litigation, litigation kind of broadly defined, is still a huge area. But it’s sufficiently…

Marlene Gebauer (14:03)
but you’re kind of handling specific tasks as opposed to, it could do anything.

Mark Doble (14:07)
Right. Right. Yeah. Yeah, exactly. So, yeah, I think people still would accuse us of kind of being too broad in many ways, but I do think, especially with the power of a lot of today’s technology, you can do very specific tasks really well, but you can do very specific tasks, I think, even better if properly done in the context of a suite of tools.

Especially in litigation, if there is an AI that’s assisting with each step of the litigation process, AI is gonna be way more helpful at each step along the way because it has the context of the entire litigation file, assuming it does have the context of the entire litigation file. Our product thesis that maps to our roadmap is that,

yeah, if this same technology is involved in each step of the way, it’s going to be way more helpful at each step. And so the point solution, I think right now we’re probably at a unique moment in time where people can build really powerful point solutions, but I don’t think that’s going to last very long. I think having broader context of either the individual user’s working style, their patterns, their preferences, or

probably the understanding of an entire file, tools that have that breadth to them are also gonna be more powerful on the specific task.

Greg Lambert (15:41)
So Mark, you come from a unique background where you have both a law and a software engineering background. So how did that kind of blend your thinking as you were practicing? What kind of advantages do you think it gives you in tackling a topic in the legal tech space?

Mark Doble (16:04)
Yeah, well, I spent probably half my time in law school building tools. Some of the early large language models were just coming out right around that time. And I just was completely fascinated by AI and just how fast it was improving. And then I spent a year working in a law firm. And I think it was probably just enough to understand

Greg Lambert (16:10)
Yeah.

Mark Doble (16:31)
a sufficient amount of what it was really like, how the sausage was made, so to speak. And I think that was probably just enough. And there definitely is a risk, I believe at least, that if I had spent much longer, it might’ve been harder to innovate and to build. I think this is probably not true for all things or all legal tech

applications, but I do think that you need some sort of out-of-the-box, unconventional way of looking at the problem. Although the business of law is so complex, so difficult to understand for people who haven’t experienced it at all, that you need some experience, you need some exposure, but too much can really limit thinking, subconsciously I think. You’ve come up with all these reasons why it wouldn’t work, but really the naivety that

might be needed from somebody who hasn’t spent too much time in there is really what helps you kind of push through that and expose some new ways of doing things.

Greg Lambert (17:34)
And you might also become addicted to that paycheck that comes in.

Marlene Gebauer (17:38)
Hahaha

Mark Doble (17:39)
Yeah, that can totally be it. Yeah, life circumstances change, you know. Early in my career, it was, my wife and I, we had just got married, hadn’t had kids yet. She’s like, yeah, makes sense. Let’s go ahead, I’ll try this and see if this works as a company. So, yeah.

Marlene Gebauer (17:59)
Well, I have to say congratulations on Alexi’s recent $11 million Series A funding led by Drive Capital. I think this brings the total funding to over $15 million, is that right? Congratulations. So what are your strategic goals with this new round of funding, and how do you plan to use it to enhance Alexi’s offerings?

Mark Doble (18:13)
That’s correct, yeah, yeah. Thank you.

Yes, well, I appreciate the congratulations. We try not to celebrate funding rounds. It can maybe be some slight measure of success or validation, but certainly in today’s funding market, probably not. We really care about our clients, our customers, the value they get out of the product. And so the more value we can deliver,

and these are quantitative measures, that’s what we focus our celebration around. But definitely, Drive Capital, amazing investor. They’ve had many successes. And this is also their focus, which has been really good for our company: how do we accurately measure and drive more value in the product? I think this is…

Having investors like Drive and many of the others that have, in this recent wave of AI, gotten into legal tech, I think has been really good for the culture of innovation and product development within legal. It’s brought in a lot of really good best practices from a lot of other industries for how you build product, how you innovate, and how you do things. But I think this is…

This is a new wave, a new level of sophistication, I think, that investors like Drive and many others that are now in legal tech are helping to bring. And so that’s really what this new capital is for: to help us build higher-value products, innovating around distribution and product. You know, it’s not just, let’s build as much value as we can.

How do we make it easier for new firms to try this technology, to understand what it is, and to get onboarded to it? And so it’s all across the board that we’re using this money to help just deliver more value to our customers and prospective customers.

Greg Lambert (20:24)
That’s great. So I know, Mark, that Alexi recently introduced some exciting new features like the AI conversational assistant, and there’s a document analysis tool. So do you mind just kind of talking more about the features of what these tools do and how they help litigators in their day-to-day work?

Mark Doble (20:49)
Yeah, they’re all along the same theme of just helping users understand the evidence that they have, that they might need, and the law on a given file that they’re working on. And so all of our features are really geared towards that, exactly: being able to easily integrate documents, summarize them, ask very detailed questions.

The conversational interface is a way for us to abstract away from all these different agents that are working behind the scenes. For each user, we really wanted to avoid this kind of module feel, where you’ve got this drafting module over here, you’ve got this research module, you’ve got this doc upload summarization module or functionality or feature over here. Everything is accessible

through an intelligent single interface. Right now, at least, it’s a single user and an AI agent behind the scenes; there’s dozens of agents that are working depending on the task. I would say, kind of just more broadly, we have this view that regardless of how much better these foundational models get, there’s these debates: are we hitting these scaling laws? Are

we hitting limits for how much better these foundational models are going to get? It’s very clear that we’re still at the beginning of what’s possible with these models, in particular around a few key concepts. One is around scaffolding: design a bunch of different models that all connect together, that validate each other, that each do a particular task. And then you build up this meshwork of models that

handles what looks like a very complex task from the beginning, one that a single model with a single, maybe one or two prompts chained together, even that’s not gonna do. But when you have this meshwork, this scaffolding of tool use, maybe it’s gonna look something up on the internet, maybe it’s validating some other component, maybe it’s generating some other data that’s needed to do this other task, and all of these things together. Now, all of a sudden, this single, not-that-smart model

as a system is doing incredibly powerful things. And again, we’re just at the beginning of this: scaffolding, tool use, all of this chain of thought. And there’s another concept, and again, this is kind of the kernel of this idea that we’re announcing early next year, that I think is truly, it’s going to be mind-blowing and really lead to this advanced legal reasoning capability that we’re all kind of waiting for in an AI system.

It’s very exciting, but it’s all geared towards helping litigators get the result they want for their client faster, more affordably. Yeah.
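The scaffolding idea Doble sketches, small single-purpose steps where one component validates another before anything moves forward, could look something like this. The step functions here are stand-ins for individual model calls, not any real Alexi component:

```python
# A toy "scaffolding" pipeline: each stage is a stand-in for a small,
# single-purpose model, and a separate checker validates one stage's
# output before it is allowed to feed the next.

def retrieve(question):
    # Stand-in for a retrieval model over some corpus.
    return ["passage A", "passage B"]

def summarize(passages):
    # Stand-in for a summarization model.
    return " / ".join(p.upper() for p in passages)

def validate(summary, passages):
    # Stand-in for a checker model: does the summary cover every passage?
    return all(p.upper() in summary for p in passages)

def answer(question):
    """Compose the stages; fail loudly if the checker rejects a step."""
    passages = retrieve(question)
    summary = summarize(passages)
    if not validate(summary, passages):
        raise ValueError("checker rejected the summary")
    return summary

print(answer("What does the lease say about notice?"))
```

The individual pieces are simple, but because each hand-off is checked, the composed system can be trusted with a task no single stage could do alone.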

Greg Lambert (23:47)
And Mark, what kind of reaction are you getting from your users on having this more singular interface rather than a modular style that you see in a lot of products?

Mark Doble (23:59)
I think some people want perhaps a little bit more control, and there is this question about discoverability: I didn’t even know you could do that. And our feedback is, well, why didn’t you ask? You could have typed in, can you do this? And it would have said, I can do this or I can’t do this. But I think, as capability continues to evolve, you can’t just keep adding links to a dropdown. You just can’t. And so you have to have

an intelligent orchestration of all these agents performing all of these discrete tasks that, combined together, do really what you want. And that has to come through an elegant single interface, a single experience, that does all that orchestration behind the scenes for you.
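A single interface routing each request to specialist agents behind the scenes might be sketched like this; the intent classifier and the agents are hypothetical stand-ins, not Alexi’s actual architecture:

```python
# Rough sketch of one conversational entry point that dispatches each
# request to a specialist agent. In a real system the classifier and
# the agents would themselves be model calls.

def classify_intent(message):
    # Stand-in for a model that picks the right agent for the request.
    if "memo" in message:
        return "research_memo"
    if "summarize" in message:
        return "doc_summary"
    return "qa"

AGENTS = {
    "research_memo": lambda m: f"[memo agent] drafting memo for: {m}",
    "doc_summary": lambda m: f"[summary agent] summarizing: {m}",
    "qa": lambda m: f"[qa agent] answering: {m}",
}

def handle(message):
    """One entry point; orchestration happens behind the scenes."""
    return AGENTS[classify_intent(message)](message)

print(handle("please summarize the deposition"))
```

The user only ever sees `handle`; adding a new capability means registering a new agent, not adding another link to a dropdown.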

Greg Lambert (24:46)
I agree with you. I think that’s probably the wave of at least the foreseeable future, as I think you’re going to see much more of a simpler interface that guides you, rather than a dropdown from a hamburger link that gives you a series of potentials.

Marlene Gebauer (24:49)
Yeah.

Mark Doble (25:02)
Right.

Marlene Gebauer (25:04)
People get tired of clicking. They get tired of all these multiple clicks and having to figure out where they go. So yeah, I agree. Yeah, we’ll just talk.

Greg Lambert (25:08)
Yeah, pretty soon we’ll just do away with the mouse and the keyboard altogether and we’ll just talk to it.

Mark Doble (25:13)
Well, you never know. You never know.

Marlene Gebauer (25:18)
So, no, no, good.

Mark Doble (25:18)
It’s also around, sorry, one other point also around prompting, right? We’ve had many users say, like, I want to kind of preset or pre-fill prompts, and there’s many products that do that, right? And I think there is an important idea in there, that you need to learn how to interact with these AI systems. But I think the more that we can

think of it in those general terms, that I just need to understand how to communicate with it, I need to help it learn how to communicate with me, and we need to learn together to do this. I don’t think, long term, lawyers are going to be thinking about prompting. As vendors get better at building tools, it’s going to feel much more conversational.

That’s at least what we’re striving to do, to kind of take that more technical product cognitive load off of the user and just make it feel like you’re just getting work done. That’s what we’re striving to do. So yeah.

Marlene Gebauer (26:27)
Yeah, I mean, you raise a really good point, that oftentimes people sort of have that blank-sheet effect where it’s like, I know this is gonna be helpful, but I don’t really know how to get started. And I think a better communicative aspect is gonna make a huge difference for people like that.

Speaking of speed, Alexi claims that it can automate up to 80% of routine tasks for litigators. Can you share a specific example of how Alexi can transform a traditionally very time-consuming task, or even a series of tasks, into something more efficient as part of the AI-driven process?

Mark Doble (27:12)
Yeah, totally. Well, one, I would say kind of a very simple one, is our memos product, our memos capability. I would argue, when we benchmark against the others out there, the CoCounsels, the Lexises, I think quite clearly there’s advancements that we’ve made,

on average, and we hear this from our customers too, that it’s more accurate, more comprehensive than others in the market. I think there’s several reasons for that. One very simple one is just focusing on retrieval, getting the cases right. If you do this really well, then the whole generative stuff that goes along with putting a memo together, that’s actually the easy part. The hardest part is identifying which cases are

most authoritative, most properly responsive to this question. And so we were doing agentic RAG before anybody even knew what agentic RAG was, right? And so you identify certain cases, and then you build an AI agent to go and read that case and say, is this case actually responsive, or is this section that the model’s identified as being responsive to this question

actually correct, or is it from a dissent, or all of these things, right? And if not, you throw away the case. So you’re having these models practically read the cases for you and decide, are they relevant to include in the memo? And then there’s the concept, once you identify maybe there’s five or seven cases that are helpful or responsive to this:

now go and have the model read all the cases that were referenced in those cases. Maybe there’s additional cases that are now out there; this is what a human lawyer does when they’re researching, right? And now bring these cases into the mix. Do these help how we understand the law? And then take those original five cases that were referenced and go look up those. Are there any other cases that actually talk about those cases? You know, and there’s this recursive

looping you can do. And this is very much an agentic process, right? That really helps this retrieval step of identifying the cases that are actually responsive to a question. And as long as you’re doing each one of these steps accurately, reliably, and you measure and evaluate and ensure that that’s the case, then this removes a significant amount of legwork for users to properly identify the key cases that are responsive to a question that they might have. And so

on this particular task at least, yeah, it’s easily 80% of a lot of this work that was done maybe a couple of years ago that has been entirely automated.
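The recursive citation-chasing loop described here, follow citations outward from the first responsive cases, have an agent read each one, keep it or discard it, then repeat, can be sketched as follows. The toy citation graph and the `is_responsive()` stub stand in for a real case-law index and an LLM reader agent:

```python
# Sketch of the recursive, agentic retrieval loop described above.
# A toy citation graph stands in for a real case-law database, and
# is_responsive() stands in for an LLM agent that reads a case and
# decides whether it actually answers the research question.

TOY_CITATION_GRAPH = {
    "Smith v. Jones": ["Doe v. Roe", "Acme v. Beta"],
    "Doe v. Roe": ["Old v. Older"],
    "Acme v. Beta": [],
    "Old v. Older": [],
}

RESPONSIVE = {"Smith v. Jones", "Doe v. Roe", "Old v. Older"}

def is_responsive(case):
    """Stub for an agent that reads the case and validates it is on point
    (e.g., rejects a passage that turns out to come from a dissent)."""
    return case in RESPONSIVE

def gather_cases(seeds, citations=TOY_CITATION_GRAPH, max_depth=3):
    """Follow citations outward from seed cases, keeping only cases the
    agent judges responsive, up to max_depth hops."""
    kept, frontier, seen = [], list(seeds), set()
    for _ in range(max_depth):
        next_frontier = []
        for case in frontier:
            if case in seen:
                continue
            seen.add(case)
            if is_responsive(case):  # agent "reads" the case
                kept.append(case)
                # recurse into the cases this one cites
                next_frontier.extend(citations.get(case, []))
        frontier = next_frontier
        if not frontier:
            break
    return kept

print(gather_cases(["Smith v. Jones"]))
```

Note the loop discards "Acme v. Beta" and never explores its citations, which is the legwork-saving step: only cases an agent has vetted as responsive get their citation trails followed.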

Marlene Gebauer (30:06)
It’s really impressive.

Greg Lambert (30:08)
Yeah. Well, anytime we talk about efficiency, I think, you know, in an industry where we’re billing by the hour and we’ve had a history of

kind of raising our associates up on learning, kind of teething on reading all of those cases, understanding the entire process. So as AI tools like Alexi come in and kind of evolve how we’re handling the research, the memo writing, things like that, how do you see the roles of junior associates changing? And how do you see tools like Alexi

helping them? Because I think a lot of them will say, my god, I don’t want so much work taken away from me. How am I going to build, how am I going to work, or how am I going to learn? What would you say to that, to the junior associate?

Mark Doble (31:04)
Yeah, well, there’s no doubt there’s a uniquely human role. I think in the litigation process, there’s a role that can only be done by a human. And I think even if you remove the limits of capabilities of AI, this is still true, that there’s something fundamental about being human that is relevant to the litigation process when it comes to rights and liabilities and

disputes with other humans, other entities. There’s something fundamentally human about that, that you need humans actively involved in mediating this process. And there will be this process of figuring out exactly what that is, where that role is. I’d advise young attorneys to

try and think about that, figure out what that is and really develop those skills, focus on those skills, helping clients achieve their goals. The more you can really focus on that, which, by the way, involves learning these technologies. How can I leverage these technologies the best that I possibly can, so that I know how they’re going to impact the client, how they can be leveraged to best impact the client? I mean, it’s clear that these

new attorneys, all attorneys, need to really understand how to use these tools if they want to provide the best value to their clients. But also just understanding that unique human element to it. And there’s something certainly subjective about achieving a goal, right? Achieving a goal in a legal dispute, that some legal outcome is going to further a client’s goal. Knowing where that wall for the human is, and becoming really good at that,

is going to be key for the successful litigators of the future.

Greg Lambert (32:58)
And we can’t talk AI, of course, without talking security. So what do you do when you go through a security audit? What do you do? Or how are you calming the nerves of the security ops folks at law firms and other places when you talk about putting all of that information into one system?

Marlene Gebauer (33:03)
All right.

Mark Doble (33:22)
Well, I think a really good place to start is that with everything that’s new with this recent wave of AI, when it comes to security, there’s so much that remains the same. You know, it’s around following where the data goes. Is it okay if it goes there or not? Is it encrypted? What levels of encryption? At rest, in transit. These things don’t really change. Who has access to the data?

All of the kind of first principles of security are still true. Understanding how the models work, whether data can get trained into models, that’s new, and it’s important to understand that. Just, yeah, safeguarding where the data can go, right? And it’s really just these models: if the data can get into training, it can get into the outputs of models.

Understanding that, and I think it’s pretty clear now where that can happen and where it can’t, and we can be pretty confident where it can’t. So, yeah, for the most part, I think now we’ve thought through a big portion of the security and confidentiality. There are some unanswered questions, I think, around litigation privilege, you know, the work product of AI agents and systems like this and…

So there are some other novel questions that will get answered, I think, but for the most part things haven’t actually changed as much as the technology would suggest that they have.

Marlene Gebauer (34:53)
So, we always end things with our crystal ball. So looking ahead, what do you see as the next frontier in AI for litigation, and how is Alexi positioning itself to lead in that area?

Mark Doble (35:11)
Well, there’s, I mean, I would love to talk about this for, you know, an hour, but if we can do this again, maybe next year. There’s so much more. 2025 is going to be,

Marlene Gebauer (35:11)
Big question.

Yeah.

Mark Doble (35:31)
unbelievably significant. I think our core thesis is, you know, I mentioned this earlier: I was at a conference not too long ago, and I painted two paths of AI development. Was ChatGPT kind of a step function improvement, where all of a sudden we had this powerful new technology, but it was really just a step function? Or are we on this continuing path of improvement, and

ChatGPT really was just kind of this threshold moment, that we just kind of crossed a threshold? And almost 90% of the room said it was just a step function improvement, which would imply that, well, it could be another decade or two decades before we get a similar, another step function improvement, that there was a lot of work leading up to it, and then all of a sudden we have this huge breakthrough. But I really don’t think that’s what’s happening.

Greg Lambert (36:23)
I’m going to say you’re in the 10% that thinks that, you know.

Mark Doble (36:27)
Yeah, a hundred percent. So this was the beginning of a talk where I had to try and convince everybody why we were actually on an exponential curve. And I think it might’ve reversed; I should’ve polled them at the end. But I think there are far more very compelling reasons for why we are very, very much on an exponential curve, which means that if we are on an exponential curve, on average every year

the progress is more than the previous year, right? And it’s really hard to look back and to say, okay, we’re gonna have that same amount of improvement into the future, because it’s exponential; it’s really hard to predict. It just keeps happening faster and faster, more and more. And I really do believe that that’s the path that we’re on. Even if people said that these foundational models were kind of at the limit, the scaling limit, you know, data, compute, all of these scaling laws, they’re saying we can’t just keep

scaling up, which actually might not even be true. But even if it were true, there’s still so much to do with what people call, and we’re huge proponents of, this concept of un-hobbling and scaffolding models, you know, chaining models, chain of thought. Again, there’s, I think, a really important insight in this new system that we’ve designed that we’re

releasing, and we’ll be providing a lot more information early next year. I really do think that advanced legal reasoning is this emergent property of the system that we’re launching. And again, this is still just the beginning of what’s coming. So there’s a lot. The only challenge is knowing when certain things are going to happen.

But again, I think that there’s a critical role for the human lawyer, the human involved in the process. Our legal system is fundamentally human and built with humans in mind. I mean, maybe this will change and I’ll be proven wrong, but I really do think that humans are core to the delivery of the system. But AI is gonna

facilitate and change the whole system quite radically, but I think it’ll be very hard for humans to no longer be at the center of it.

Greg Lambert (38:57)
Yeah, I could tell you were really restrained because I think next month you’ve got a bigger story to tell, don’t you?

Marlene Gebauer (38:57)
Yeah.

Hahaha

Mark Doble (39:06)
Yeah, we do, certainly. Yeah. Yeah.

Greg Lambert (39:10)
Well, Mark Doble from Alexi, we want to thank you very much for coming in and talking with us today on The Geek in Review.

Mark Doble (39:18)
Thank you very much for having me. Yeah, it’s a pleasure to meet you both.

Marlene Gebauer (39:23)
And of course, thanks to all of you, our listeners, for taking the time to listen to the Geek in Review podcast. If you enjoyed the show, share it with a colleague. We’d love to hear from you, so reach out to us on LinkedIn.

Greg Lambert (39:34)
And Mark, we’ll make sure that we put links in the show notes, but what’s the best way for people to reach out and find more about Alexi or to reach out to you if they have questions?

Mark Doble (39:47)
Yeah, so alexi.com is our website, and you can email me, mark at alexi.com. We’re also across LinkedIn, Twitter/X, and yeah, feel free to reach out and happy to chat.

Greg Lambert (40:03)
Are you on Bluesky yet?

Mark Doble (40:06)
I think we might be. I haven’t been involved too much in the social media stuff, but yeah, we might be. Yeah.

Marlene Gebauer (40:12)
Cool. And as always, everyone, the music you hear is from Jerry David DeCicca. Thank you, Jerry. Okay, bye.

Greg Lambert (40:19)
All right, thanks everyone. See you later.