Nikki Shaver (00:00)
Hi Greg and Marlene, I’m coming to you live from the English countryside. I really want to let all of your viewers and listeners hear about something that I think will be really interesting for them. One of the best ways, of course, to stay on top of technology in this crowded market is to see demos of new products and new product features. All of us think about new products, but of course the products that are out there, also because it’s easier to build with AI,
are moving faster, evolving faster than ever before and dropping new features all the time. So it’s also important to stay on top of that. But of course, it takes time to individually reach out to vendors. And also you may not necessarily know what’s new and worth looking at, or perhaps you don’t want to commit to a relationship with a vendor right now by reaching out and opening up that dialogue, but you’d still like a sneak peek at the solution.
Why not let us at Legaltech Hub take care of that for you? We are organizing what we call demo dozens. These are sessions where we get 12 vendors to come in and show us an update on their product, or give the first demo of their product at an early stage. You can register for free, come along, and you get the schedule ahead of time so you can pop in for as many or as few as you’d like to see. You get a recording afterwards. It’s a great way to stay on top of the market.
Go to legaltechnologyhub.com, look for the events dropdown on the top menu, go to LTH events, and you’ll see the link to register for the next demo dozen, which is coming up on May 19th. And stay tuned, we’ll be doing these on a regular basis.
Marlene Gebauer (01:42)
Welcome to The Geek in Review, the podcast focused on innovative and creative ideas in the legal industry. I’m Marlene Gebauer.
Greg Lambert (01:49)
And I’m Greg Lambert. And Marlene, we’ve seen over the past few months where the AI market has felt like it’s just gone through all of these shifts. And even when foundational model companies like Anthropic, you know,
create a legal plug-in and it causes a huge disruption in the market. And yet we’re still seeing specialist vertical startups that are raising massive rounds to prove that these foundational models can’t necessarily solve it all.
Marlene Gebauer (02:18)
Yeah, that’s right. You know, we’re currently in a high stakes debate over whether the future of law belongs to the one model for everything giants or the specialized scalpel designed for specific practice areas. And our guest today is right at the epicenter of this debate.
Greg Lambert (02:36)
We’re thrilled to welcome Andrew Thompson, the CTO of Orbital. Andrew is a lifelong geek (we love that), an engineer with over two decades of experience in a career that spans from fintech to prop tech. And now he’s diving into legal AI.
Marlene Gebauer (02:54)
Today he leads the technical vision for Orbital, a real estate legal AI platform that has processed over 200,000 transactions and recently closed a massive $60 million Series B round to fuel its expansion into the US market. Andrew, welcome to The Geek in Review.
Andrew Thompson (03:13)
Thank you very much for having me, Marlene and Greg.
Greg Lambert (03:15)
All right, Andrew, before we jump into our questions and talk about your work there at Orbital, would you mind giving our listeners a bit of an elevator talk on what Orbital does and who your main clients are and how they use your product?
Andrew Thompson (03:30)
Yeah, absolutely. So, Orbital is an AI built specifically for real estate. That specialism is what sets us apart from generalist legal tech and foundation model plugins. We go really deep on that domain. Our team combines real estate lawyers and technical experts who have first-hand experience with real estate legal work and property transactions. And we play in a space, real estate, that is worth about $140 billion.
And some of our clients include Clifford Chance, Seyfarth Shaw, Grosvenor, Vinson & Elkins, and many more. We did about 200,000 property transactions last year in the US and the UK.
Greg Lambert (04:10)
I’ve talked to a number of real estate lawyers who saw your demo on Legaltech Hub.
They were really impressed by what they saw there. So Andrew, let’s talk about you for a second. Your path to becoming CTO at a major legal tech player hasn’t been exactly a straight path, which is not unusual in this industry. So how did the operational discipline and the focus on the unsexy parts of efficiency in the service industry help shape your
Andrew Thompson (04:18)
Thank you.
Greg Lambert (04:45)
approach to building this agentic system for real estate lawyers here at Orbital.
Andrew Thompson (04:51)
Okay, great question, Greg. So let me start with the first part and then I’ll go into the second part, the agentic bit. So yeah, my career is a little bit nonlinear. I’ve been a software engineer, but really early on in my career I was well aware that if you point software engineers at the wrong problem, that doesn’t turn out well. So I’ve always been an engineer at heart.
Greg Lambert (05:13)
They create really
cool stuff that has no practical effect. Yes, I know I’ve been there.
Andrew Thompson (05:16)
Look, yeah, yeah, that happens a lot. So
Marlene Gebauer (05:17)
Hahaha.
Andrew Thompson (05:19)
I was always thinking, I’m going to be technical, but how do we make sure that we actually build a product customers want? And that’s been one of the key threads throughout my career. I’ve been involved, interestingly, in a prop tech business in Canada and then here in London. Prior to coming to Orbital, I was at a business called Appear Here. It was essentially an Airbnb for Main Street properties.
So we would go to landlords in Paris, New York, London, Amsterdam, Milan, Chicago, find spaces that we could put on a platform and sell or rent out short term to brands like Chanel, Dior, Nike, Adidas. And so I had engineers, I had product people, but I also had real estate brokers on my team, which is very different for a CTO. And the reason we were doing that is we would have brands that come to us and…
tell us exactly what spaces they wanted in the best cities in the world, what shop frontage, how much square footage, what they were potentially willing to pay, and I would send brokers out onto Main Street to go and find those spaces. Once we found them, then we needed to transact them really quickly. And as you can imagine, that obviously became a bit of a painful process. Those spaces…
Greg Lambert (06:24)
Yeah, I see
the problem in that process, the quickly part.
Andrew Thompson (06:28)
Exactly. Yeah.
And so, you know, the quicker we could get the leasing done and get the space on our platform once we found it, the better; it was obviously a competitive market for some of the best spaces. And so I was dealing with real estate lawyers day in and day out there as well in order to provide value to our business. So coming into Orbital, I understood the unique challenge that work had, and maybe what was required. This is prior to the whole current AI revolution. You know, we were building kind of
machine learning or deep learning when I first started with the business over five and a half years ago. And so that’s my non-linear path: I’ve always been at this intersection between the technology and trying to get to the heart of the customer, where often the physical environment plays a key part. So, the second part of your question: how does that actually fit into building these agentic systems? You know, most people have had this experience: ChatGPT comes out.
You have this wow, this is amazing moment. How do I actually use this fundamentally new technology to build product at our company? We have mantras inside our organization, principles that you focus on whenever something new comes out. I’ve seen going from on-premise software to the cloud, from desktop to mobile; there are always these intuitions that are very right going from one paradigm shift to the next.
But there are always some intuitions that are very, very wrong. Like if you just imagine mobile: I know, let’s take a giant website and shrink it down to the tiny bit of your screen, that’ll work for customers. And clearly that didn’t. You needed to reimagine the UX. And so in this agentic paradigm, you have to operate differently. I tell you, remember when GPT-3.5 blew everybody’s socks off, and if you squinted, you could see it doing a little bit of legal reasoning, but if you
pushed it any further than one or two prompts, you realized it would fall over. But it was that insight. One of the mantras we have is betting on the model. This idea that, regardless of where it is today, if you looked in your crystal ball, where can you imagine these models are going to go cost-wise, intelligence-wise, capability-wise, speed-wise, and then try to build a product for that? And so ultimately, trying to figure out:
you know, every company wants to build a product that customers love and you want to get out there and be first to market. But when a brand new technology comes out, like some of these AI models that were, at least in the early days, very rough around the edges, you kind of need to figure out what is the shape of the product that customers actually want, knowing that as it gets better over the next six, 12, 18 months, the product will build into that. It obviously hasn’t been a massively long time since ChatGPT and AI have been out, but there have already been lots of
startups that have built something very impressive and then disappeared into the ether. And I think the game you need to play is betting on the model: build something that’s valuable today, but also in six months and 12 months. And that is surprisingly different from building SaaS software. And I think, to your preamble right at the beginning, that’s what a lot of the market is reacting to. That’s a lot of what software businesses are trying to grapple with.
Marlene Gebauer (09:34)
So I want to set the stage for the next question, Andrew. In early 2026, we saw this massive market panic after Anthropic launched a legal plugin, and investors were fearing that, you know, middlemen in legal tech were going to be dismantled overnight. You’ve been quite vocal that this one-model-for-everything narrative is really missing something fundamental about the legal market. So why do you believe that
practice-area AI is ultimately going to win out over generalist tools, and what does a specialized platform like Orbital provide that a general-purpose model like Claude or ChatGPT simply can’t?
Andrew Thompson (10:15)
Yeah, it’s a great question, but it’s also pretty much the question I think most people are grappling with at the moment. You obviously highlighted legal, but I see this everywhere. This idea that, you know, AI labs are building better and better general intelligence. What was bleeding edge two years ago, 18 months ago, even a year ago or six months ago, is now kind of table stakes; everybody can do it. And it’s sort of…
Marlene Gebauer (10:21)
Ha ha.
Greg Lambert (10:22)
Tell us what the secret sauce is.
Marlene Gebauer (10:24)
What is it? We want to know.
Andrew Thompson (10:43)
Ultimately, is everything going to be consumed, or is there a way to build software where you’re constantly a level above what the AI labs are churning out with their models? And so with that in mind, I think about it as this barbell approach. It’s not an either-or. I think the hollow middle, not being opinionated, leveraging a little bit of the generic but only going so far into solving the problem deeply for your customers, is where you lose out; you want both ends of the barbell.
And so maybe, instead of just purely talking about that in the abstract, let’s take a step back and not think about AI for a moment. If you have really clever people, and again, I’m going to anthropomorphize slightly here, so take it with a pinch of salt. There are a lot of smart people; let’s take, for example, a bunch of PhDs. They’re not necessarily great at real estate law right out of the gate.
There’s a whole bunch of training and tools that they need, and on-the-job experience. Intelligence as a precursor is obviously really good; the more intelligent lawyers are often really good at what they do, but there’s a whole category of stuff that comes after that. So using that as a bit of an analogy for this idea of generalized intelligence versus the vertical: you can start with something general and build something light on top. And yeah, you’re a little bit better than the general, but you’re almost…
at threat: each time they get better and better, you very quickly get eclipsed. And we’ve obviously very much seen this recently over the last few months. And so I think the way that AI changes this is that, from this analogy with humans, AI already has all of this legal and real estate knowledge built in. And so that’s why it’s a little different than humans, right? Humans…
do take years to train, and sometimes decades of on-the-job experience to gain all that expertise. A lot of that, not everything, but a lot of that is built inside the model. And so it’s: how do you leverage that model? How do you prompt it? How do you instruct it to think like, in our case, a real estate lawyer? And then what specific tools do you need to give it in order to be vastly better than the general tools?
That’s ultimately, I think, the rough playbook that a lot of companies like us and others are thinking about from a software engineering perspective. But then let’s take a step back from AI and software engineering and products. If you just look at our customers, and other people’s customers on the ground, everybody’s enamored with AI. They get a lot of value out of it. And oftentimes you can get 80% of the way there, I’m just cherry-picking a number here, 80% of the solution,
but that last 20% is really hard. It requires not just prompting the model better; it requires proprietary knowledge and some of the tools that you’re giving it. They need to be hyper-specialized for the real estate practice area. And so for us, we will continue to leverage the AI models, and we’re expecting them to get a little bit better, twice as good, 10x better over the coming years. But so long as we have the tools and the data on top of those models, we will leverage them to solve our customers’ problems in a truly deep way that these generalized systems aren’t able to.
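The "general model plus specialized tools" playbook Andrew describes maps onto the tool-calling pattern most LLM APIs now support. Here is a minimal sketch of that shape; the tool name, schema, and return value are invented for illustration, not Orbital's actual implementation:

```python
# Sketch of "general model + specialized tools": the model supplies general
# reasoning and decides when to call a tool; the proprietary domain logic
# lives behind the dispatcher. Names and fields here are hypothetical.

TOOLS = [{
    "name": "plot_legal_description",
    "description": "Turn a metes-and-bounds legal description into "
                   "boundary coordinates on a map.",
    "parameters": {
        "type": "object",
        "properties": {"description_text": {"type": "string"}},
        "required": ["description_text"],
    },
}]

def dispatch(tool_call: dict) -> str:
    """Route a model-issued tool call to specialized domain code."""
    if tool_call["name"] == "plot_legal_description":
        # A real system would parse the description, plot it on a map,
        # and return the resulting geometry.
        return "polygon with 4 vertices"
    raise ValueError(f"unknown tool: {tool_call['name']}")

print(dispatch({"name": "plot_legal_description",
                "arguments": {"description_text": "Beginning at the oak tree..."}}))
# prints: polygon with 4 vertices
```

The appeal of this split is that the tool definitions and domain code tend to survive model upgrades, while the general reasoning improves "for free" as the underlying model improves.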
Marlene Gebauer (14:11)
I have a sort of out-of-left-field question. So I’ll just throw it out there and see what happens. Do you think this kind of debate, point solutions versus general solutions, is going to have any impact on things like the energy consumption that we are all thinking about when using GenAI models?
Andrew Thompson (14:14)
Go for it.
So you’re saying it’s like, will the energy consumption be different if you’re using a general model or a…
Marlene Gebauer (14:44)
Yeah, will it have a positive impact?
Andrew Thompson (14:46)
I don’t think I’m educated enough to know categorically the answer on that, actually. I was just thinking, if we go from first principles, what really matters here? The unit of measurement is tokens. The more tokens you use, the more GPUs are crunching them, the more energy you’re using. And so ultimately, from an application developer’s perspective, the fewer tokens you use, the less power you use.
I think you could probably argue that you’d be slightly more efficient if you really understood the domain. But with real estate there are sometimes a huge number of documents. We’re talking a portfolio of a thousand properties, and each property has 10 documents, and each of those documents is 50 to 100 pages. You ultimately do need to read every single word, or quote-unquote token, on those pages, and so you are still burning some of the power there.
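To make the scale concrete, here is the back-of-envelope arithmetic behind the portfolio example above; the tokens-per-page figure and the price are assumptions purely for illustration, not Orbital's numbers:

```python
# Rough token estimate for: 1,000 properties x 10 documents x 50-100 pages.
properties = 1_000
docs_per_property = 10
pages_per_doc = 75        # midpoint of the 50-100 page range mentioned
tokens_per_page = 600     # assumed: typical dense legal prose after OCR

total_tokens = properties * docs_per_property * pages_per_doc * tokens_per_page
print(f"{total_tokens:,} input tokens")  # 450,000,000 input tokens

# At an illustrative $1 per million input tokens:
price_per_million = 1.00
cost = total_tokens / 1_000_000 * price_per_million
print(f"~${cost:,.0f} just to read the portfolio once")  # ~$450
```

Whatever the exact per-page figures, the point stands: every token must be read at least once, so the big efficiency gains come from cheaper tokens, not from skipping reading.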
I think ultimately the efficiency comes more from a layer below the application stack. Obviously some application developers can be wasteful, but most of the time we are not wasteful, because we have to spend the money on tokens. And so, you know, we’ve already seen cost reductions in tokens. And I think partly that comes from Nvidia’s GPUs getting better and more energy efficient, as well as the AI labs producing
better algorithms that are vastly more efficient. I think that’s ultimately where the gains are mostly going to come from. That would be my thinking-on-the-spot response to that.
Greg Lambert (16:11)
I like that question though. Maybe one of the PhDs that’s listening to this can write a paper on it, Marlene. Andrew, I want to talk about real estate itself. We talk about real estate being the largest asset class in the world.
Marlene Gebauer (16:12)
And we, go ahead.
Andrew Thompson (16:14)
Hmm.
Greg Lambert (16:33)
And yet the work, and I know Marlene and I have both seen this from our real estate lawyers, is still very manual. It’s fragmented, there’s all kinds of nuance in it. Some people
may actually say that it really hasn’t changed that much in the last couple hundred years. So you focused Orbital on spatial visualization and connecting deeds to the physical reality of the land. And I’ve actually seen demos of this, and it’s really, really interesting how you do the overlays and you’re working with the land
itself on those maps. So can you explain the technical challenges of teaching the AI to read the property boundaries and the historical maps, and how the spatial intelligence actually speeds up a property deal?
Andrew Thompson (17:21)
Mmm.
Yeah, absolutely. So maybe I’ll touch on a few of the challenges; there are quite a lot in real estate. A lot of people are used to document-heavy work, a lot of text, and then reasoning over that, and now there’s this visual and spatial layer. So I think, first and foremost, there’s a reasoning challenge in turning what is typically a textual legal description of the property into a collection of essentially points and lines and curves. And you then need to
create the property boundary from that. So you can imagine, I’m just going to cherry-pick something that’s often slightly humorous: it’ll be written in the technical description, go to the tree on the corner, walk 20 paces, turn 30 degrees, and you’re mapping this all out. That is written sometimes in a beautifully photocopied document, sometimes not so much. So you first have to extract that, then you have to interpret it and make sure that
the lines and points are correct, and then actually plot that in a more classical way, without using AI. You’ve done the extraction and now you’re plotting it on a map. I think there are visual challenges off the back of this. A lot of people, you know, we all know the term LLMs, large language models; people have talked about VLMs, visual language models. They are quite a few steps behind
where we are intelligence-wise with large language models. You know, we can all take a picture of our fridge and ask it, what do I want for dinner tonight? But if you take a picture of a very complex real estate survey or plan, it just falls over. And so you need a combination of these VLMs analyzing, but then you also need to drop down into more classical machine learning or computer vision techniques. And this is all orchestrated by AI. So much like a human can come in and
look at the legal description on, let’s just say, the title commitment, and then go and plot that manually on a map, you can now get AI to do all of that automatically, piecing all of those pieces together in a way that typically would take hours for a human to do manually.
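A toy sketch of the extraction-then-plotting step Andrew describes: once the calls ("walk 20 paces, turn 30 degrees") have been extracted from the text, turning them into a boundary is classical geometry, no AI required. The calls below are invented; real legal descriptions use surveyor bearings and units:

```python
import math

# Each call is (bearing in degrees clockwise from north, distance).
# These four calls trace a simple 100 x 80 rectangle, for illustration only.
calls = [
    (0.0, 100.0),
    (90.0, 80.0),
    (180.0, 100.0),
    (270.0, 80.0),
]

def trace_boundary(calls, start=(0.0, 0.0)):
    """Walk each call from the previous vertex and return all vertices."""
    x, y = start
    vertices = [(x, y)]
    for bearing, dist in calls:
        rad = math.radians(bearing)
        x += dist * math.sin(rad)   # east component
        y += dist * math.cos(rad)   # north component
        vertices.append((round(x, 6), round(y, 6)))
    return vertices

verts = trace_boundary(calls)
# A valid description should close back on its point of beginning;
# surveyors call the leftover gap the "misclosure".
misclosure = math.dist(verts[0], verts[-1])
print(verts[-1], f"misclosure={misclosure:.6f}")
```

The hard part in practice is everything before this function: OCR on poorly photocopied pages and interpreting ambiguous or conflicting descriptions, which is where the AI orchestration Andrew mentions comes in.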
Greg Lambert (19:35)
Yeah, I am so glad you mentioned VLMs, because I’ve been aching to talk to somebody about this since I played around with some a couple of weeks ago. And just to talk about the difference between the large language models and the visual language models, I actually did a test a couple of weeks ago where I threw a scanned document at both, and Claude actually gave me a really good explanation. I think I used the
Andrew Thompson (19:56)
Mm-hmm.
Greg Lambert (20:02)
VLM model to do the visual.
And I asked it, you know, kind of what was the difference. And the response I got back from Claude, and I wanted to verify this with you, Andrew, is that the large language models can look at a document, can look at an image, and can kind of give you an explanation of what’s there. It can kind of tell you, you know, this is your refrigerator, I see some milk, I see some apples or whatever.
The
visual language model can actually explain everything that’s on the image: here’s the text, here’s the handwritten text, here’s the image of the apple and the milk. But it can’t necessarily give you the explanation of how it all gets put together. So how do you use the combination of the two?
Andrew Thompson (20:55)
Hmm.
Marlene Gebauer (20:59)
I’m also curious about the difference in how they’re trained. You know, is one trained on text but it understands visuals, and the other trained on visuals? What is the difference?
Andrew Thompson (21:11)
Yeah, let’s start with that question. I should say, we don’t train our own VLM. But I believe, you know, all of these systems need training data. When you look at the LLMs, they’re fed on the internet, they’re fed on books. There are multi-billion-pound companies producing labeled data with doctors and lawyers and software engineers, and they’re ingesting code bases. And so that’s the training data. And then you need to layer reinforcement data on top to steer it in the right direction.
I believe the approach is very similar for VLMs. And so you need images, and then you need textual explanations on top of that. That’s either historic data that’s always been there, which the AI labs can scoop up and include in training data much like they do the internet. Or, I don’t know this for sure, but I imagine they’re trying to get their hands on lots of images and then getting, quote unquote, experts
to look over them and produce a bit of a write-up. And then you can kind of understand what’s happening in the image, what it has; you can understand some of the more domain-specific elements of that image. Again, back to this trivial example of a fridge: you might have a chef look at a picture of a fridge and go, okay, well, you can make spaghetti bolognese tonight, or maybe you can make a salad based on those ingredients, but not much else.
All of that is getting sucked in from a training perspective to make these better and better. The more data you have, the better the VLMs get, and the better the quality of that data, the better the VLMs get. I think the reason they are behind is that we just don’t have a lot of that data in the world, relative to all the text. You think about newspapers, you think about even this recording getting turned into a transcript, or you think about code
that is open source sitting in GitHub; all of that is rich. Then take our domain: very detailed, real-estate-specific legal images that are buried in 100-page PDFs that are private information. The model labs don’t have access to that, and so they just can’t reason well over those images yet. To your question, Greg, just remind me what that was again.
Greg Lambert (23:16)
When you’re combining the use of them, how are you bridging the VLM information and the LLM information so that, instead of getting one plus one equals two, you’re getting a sum greater than the individual parts?
Andrew Thompson (23:31)
Yeah.
Great question. I think there’s been a wholesale shift. You know, let’s just take the textual side. Prior to ChatGPT, most people doing LegalTech 1.0 were using classical machine learning, and, while there are some players who are not quite there yet, most people have just wholesale moved over to using LLMs. I think we’re still in a world where, in order to maximize accuracy and trust and minimize hallucinations,
we have to be doing both of those at the same time. I don’t know of anybody, I’m sure there are some, but from our experience in our domain, you can’t just plug these real estate legal images into a VLM at the moment. You can get some information, but you still need to use classical machine learning techniques. These documents still need to be OCR’d. And so it’s a combination of the two, to almost sense-check each other, much like you go to a doctor and then get a second opinion.
VLMs have their strengths over classical machine learning, and classical machine learning has its strengths over VLMs, and the two together are great. Long term, you can imagine the same thing that happened with LLMs will happen with VLMs, but we’re not quite there today.
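The "sense-check each other" idea can be sketched as a simple reconciliation step between the two extraction passes. The function names, stubbed outputs, and fields here are hypothetical stand-ins, not Orbital's pipeline:

```python
# Minimal sketch: run a classical OCR/extraction pass and a VLM pass over
# the same page, then compare. run_ocr and run_vlm are stubs here; a real
# system would call an OCR engine and a vision-language model.

def run_ocr(page: bytes) -> dict:
    return {"parcel_id": "LOT 7", "acreage": "2.31"}   # stubbed output

def run_vlm(page: bytes) -> dict:
    return {"parcel_id": "LOT 7", "acreage": "2.81"}   # stubbed output

def reconcile(ocr: dict, vlm: dict) -> dict:
    """Agreements pass through; disagreements get flagged for review."""
    result = {}
    for field in ocr.keys() | vlm.keys():
        a, b = ocr.get(field), vlm.get(field)
        if a == b:
            result[field] = {"value": a, "status": "agreed"}
        else:
            result[field] = {"value": None, "status": "review",
                             "candidates": [a, b]}
    return result

report = reconcile(run_ocr(b""), run_vlm(b""))
print(report["acreage"]["status"])  # review
```

The design choice is the one Andrew hints at: neither pass is trusted alone, so agreement raises confidence cheaply, and disagreement becomes a targeted signal for a human (or a stronger model) to look at.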
Greg Lambert (24:40)
Thanks.
Marlene Gebauer (24:40)
A dynamic duo.
Andrew Thompson (24:42)
Yeah, back to that point.
Everything’s different now versus three, six, nine, twelve months from now, in terms of where a VLM is going to be. Part of our job is trying to squint and figure out: I think it’s going to get better, or it’s stagnating. And I guess that’s part of the challenge, but also the fun of the game at the moment. Building software in this industry, you’re not entirely sure all the time how fast the world is going to move, and in what trajectory.
Greg Lambert (25:06)
Yeah, well even so, Andrew, in that conversation, you were still talking about the need for OCR. You were still talking about the need for machine learning. So I mean, it’s almost like we haven’t really gotten to where one tool does it all. You still need this whole kind of toolbox of different tools, and to know which one to use at the right time.
Marlene Gebauer (25:28)
Well, talking about moving quickly: at the 2025 AI Engineer World’s Fair, and now I know that there’s an AI Engineer World’s Fair, you introduced a concept called the prompt tax, which is the hidden cost of staying on the bleeding edge while the foundational models are constantly upgrading and, you know, breaking existing workflows. So,
Andrew Thompson (25:40)
Yeah.
Marlene Gebauer (25:57)
your mantra of betting on the model, you know, building for where the tech is going, not where it is today. How do you practically manage, say, a prompt library that might become obsolete every six months, without turning your product into, I think the quote was, a fragile mesh… mess. I can’t talk.
Greg Lambert (26:15)
Six weeks.
Andrew Thompson (26:17)
Yeah.
Great point. That AI Engineer World’s Fair is an absolute gold mine for folks in our industry. It’s people at the bleeding edge, all staring at similar problems from different directions, going, how do we grapple with what’s happening? And you learn so much from people right at the bleeding edge. So I’d recommend that for some of your viewers if they’re not familiar. But to your question, let me separate two things. I think there’s teaching an AI system
how to be a real estate lawyer. And then there are all these other things, what people have called prompt engineering or the prompt tax: overfitting your AI to a given model that happens to have come out maybe six or three months ago. So let’s talk about the first one: teaching an AI to think like a real estate lawyer. What we’ve discovered over time is that if you get that wrong, if you specify it so specifically, then with each new model you have to change a huge amount,
versus if you find the right abstraction level. And again, I’m going to anthropomorphize for a second, back to what we said earlier: you’ve got a really intelligent human, they go to university, and they learn some of the theory and the fundamentals. It’s that level of abstraction when we talk about how we’re writing prompts to teach an AI system how to be a real estate lawyer. That’s our proprietary knowledge: we have real estate lawyers on our team in the US and in the UK
adding incremental value, or sweeping amounts of value, on how to do various workflows. So things like, you know, we talked earlier about legal descriptions. Where exactly do you find those in the real estate documents? What happens when you have two legal descriptions that don’t match related to the same property? How do you extract those? How do you plot them? What do you do when the boundary doesn’t match up? All of that information isn’t lost. And I think what
is also helpful is that these models have gotten better and better. You know, that talk came out last year, and the world moves pretty fast; I was a little bit worried that as our prompt library got bigger and bigger, it would be more and more work to prune it every time a new model came out. But as models come out, they’re getting better at handling all of these things. And so because we have what a real estate lawyer is, at a sufficient abstraction level, that
stays around for all time, and it continues to compound and be valuable for our product. The thing that has very quickly become irrelevant, or no longer needed, or that the models just don’t even care about anymore, is what people call prompt engineering or the prompt tax. There were all these papers coming out saying that if you told the model it was super important to your career and it couldn’t get this wrong, hallucination rates would go down by a few percentage points.
We tried all of that, we saw the results, and it was like, that’s great. A lot of that stuff is just melting away into nothing, and you no longer have to do it. I think a lot of that knowledge is now baked into the AI models; we keep talking about them getting better and better. That was a short-term fix for a problem that has now disappeared. And so back when I created that presentation, I was a little worried: what happens when our prompt library is double the size, 10 times as big? I’m actually no longer worried,
partly for those reasons, but also partly we can take our prompt library, feed it to these models, and just iteratively go on and say, this is what we’re finding from one model to the next. Help us update all of these prompt libraries. And a lot of this work, even internally for us, is getting automated.
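That last workflow, feeding an existing prompt library back through a newer model so it can modernize the prompts, can be sketched roughly like this. This is an illustrative sketch, not Orbital's actual tooling; `call_model` is a stand-in for any LLM API and is stubbed here so the loop runs without one:

```python
# Illustrative sketch of automated prompt-library migration. `call_model`
# is a stand-in for a real LLM API call; the stub below just strips an
# obsolete "prompt tax" phrase so the pipeline is runnable end to end.

MIGRATION_INSTRUCTIONS = (
    "You are updating prompts written for an older model. Remove obsolete "
    "prompt-engineering tricks and keep the domain instructions intact."
)

def call_model(system: str, user: str) -> str:
    """Stub for an LLM call; a real version would hit a model API."""
    return user.replace("This is very important to my career. ", "")

def migrate_library(prompts: dict[str, str]) -> dict[str, str]:
    """Run every prompt through the migration model; return the new library."""
    return {name: call_model(MIGRATION_INSTRUCTIONS, text)
            for name, text in prompts.items()}

library = {
    "lease_review": "This is very important to my career. "
                    "Extract the legal description from this lease.",
}
print(migrate_library(library)["lease_review"])
# → Extract the legal description from this lease.
```

With a real model behind `call_model`, the same loop would also carry a diff of observed behavior changes from one model generation to the next, as Andrew describes.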
Greg Lambert (29:42)
You're telling me I can stop lying to my AI tool that I'm going to give it a $150 tip if it gets it right. Good.
Andrew Thompson (29:50)
I would need to see the data, but probably
Marlene Gebauer (29:53)
Hahaha
Andrew Thompson (29:53)
so. Or at least the days of needing to do that are quite numbered.
Greg Lambert (29:58)
Yeah, I'm always worried that its memory will remember that I've offered to give it money, and all of a sudden it'll have access to my account. Yeah. So, Orbital recently opened up some US offices. I know you've got New York, and I know you're looking at…
Marlene Gebauer (30:03)
Follows up, yeah.
Andrew Thompson (30:04)
Exactly, it’ll extort you later.
Marlene Gebauer (30:08)
All of a sudden a guy comes to visit you, you know.
Greg Lambert (30:18)
Chicago and Austin as possibilities following the $60 million Series B round. And you've noted that the U.S. real estate legal services market is something like a $140 billion opportunity that is still, and we've seen this too, dramatically under-automated in its processes. So as you move into the U.S. market, what are the biggest differences you're seeing
in how, say, the AmLaw 100 firms approach AI, compared to what you were seeing from the Magic Circle firms back in the UK?
Andrew Thompson (30:57)
Yeah, absolutely, I think that's a great question. Just to reframe it slightly: we've definitely fully moved in; we're in the US now, and we've got the majority of the top 20 real estate practices using Orbital. I've mentioned some of the names before: Seyfarth Shaw, BCLP, Vinson & Elkins, Goodwin, Polsinelli. I've been in a lot of businesses, and when you take your product to a different country, especially in a regulated industry like real estate legal,
you ask: how much of my product needs to be completely rebuilt from the ground up, versus how much is applicable? And we found that, not everywhere, but mostly, the differences haven't fundamentally changed our core value proposition to real estate lawyers. They want speed of response to clients. They want the quality to be as high as it can possibly be. They want a second pair of eyes on everything. And ultimately they want to reduce the risk
that they carry. Those fundamentals have been the same, and the product is built around them. So whether it's the UK, where we started, or the US, now our biggest market, the differences are slight. But if I had to tease out a few, one of them is InfoSec: ISO 27001 is much more prevalent here in the UK, versus SOC 2 in the US,
and then there's data residency. If you have a service that was built in the UK, stores data there, and operates under GDPR, the US has different requirements, so the infosec setup needs to be different. The billable hour is a lot more prevalent in the US than in the UK; I think the UK has pushed really hard on this, and there's a lot of fixed-fee work, especially in real estate. So we did see a little bit of a difference there.
And then US law firms were initially a little more conservative. Obviously, with the ABA ruling that came out on AI a while back, law firms in the US had to react to that, so UK law firms were a little quicker off the blocks to adopt and use the product.
But I think a lot of that has been ironed out now. A lot of large US law firms have committees to handle it, so that's becoming less of a difference between the two regions. And then I think the big one, as I mentioned earlier, is title and survey review. That's a very US-centric piece. You get a title commitment policy, and there are often hundreds of exceptions and linked documents that you need to download and review. And part of this
geospatial visualization of the property is heavily tied into how real estate lawyers work with title companies. So we've built out that offering more and more, and it's become a leading product in the US for what it does. I'd say that's probably one of the biggest differences.
Marlene Gebauer (33:46)
So one of the things we talk about a lot on the podcast is how AI is going to affect up-and-coming people in the workforce. And we enjoyed reading about your experience teaching a class of 30 eleven-year-olds about AI at your son's school. You mentioned that they had some really surprisingly deep questions, like whether someone is actually coding you.
So what did that experience teach you about the future of work and creativity? And how should the geeks currently leading the industry prepare for the next generation of the AI-native workforce?
Andrew Thompson (34:23)
Yeah, I love this question. That was two years ago, when my son was 11. I literally just celebrated his birthday, a week early, this weekend, and he's now 13. And two years in this AI world just feels like a lifetime ago. I went back and quickly looked at that presentation, and it was like, wow, okay, things have changed. I've had to update a few of my priors, but I'll pull out maybe three things from this.
Greg Lambert (34:28)
God.
Andrew Thompson (34:49)
As per usual, children often mimic their parents, especially at that young age, and their parents' attitude to AI is often channeled through them. You could kind of pick that up when I was in that class, and even now when I've done subsequent presentations or chatted with my son's friends. And maybe the piece to think about with parents: we all sometimes fear the unknown.
There's this big thing that's happening. It's really exciting, there's lots of opportunity, but there's also lots of change afoot. And when I'm chatting with my son's friends, you can almost see which ones' parents are a little more on the paranoid side versus which ones see it more as an opportunity. This dovetails into the second piece: adults carry a huge amount of baggage about how work had to be done a certain way.
You know, I've built a lot of my career and reputation on doing things in a more manual way. Children don't have any of that baggage. They get to come of age in a world where none of that matters. Kind of like the internet: I grew up before the internet and then after, and when I look at my son, he knows nothing different. I think it puts them at a starting point ahead. Imagine you're running a race:
my son is 100 meters ahead of me immediately coming out of the gate. He can code up things now that I wasn't even doing at university, and he was doing that when he was 11. So the starting point, it's just so great to see where they're at. For his birthday he had one of his mates over to play around, and he's coded up a game. They play football in the attic, flicking these little football players and knocking balls around, and he's created a little app
that mimics commentary: it sends some text off to ElevenLabs and comes back with, "The rain is thundering down." I walk up there at 10 PM and have to tell the boys to go to bed, and they are having the time of their lives. And again, that's play, and there's work. But it's really interesting: I get the luxury of having a foot in what's happening with software engineering, where that whole industry is changing, and I get to see what's happening in real estate legal, where that whole industry is changing.
And then I get to see what my son is doing, and he's just loving it. He does not seem held back at all. There's nothing he feels he can't do. He's constantly thinking about entrepreneurialism. And then one last piece to end with: he is constantly running out of tokens. You know, there was all that talk of an AI bubble just a few months back.
Greg Lambert (37:19)
I was gonna ask. It sounds expensive.
Marlene Gebauer (37:19)
Hahaha.
Andrew Thompson (37:27)
But when I look at my son, I pay, what is it, $20 a month for his plan. I feel like the $200 plan is probably a little too much for a 12-year-old. But he's constantly running out. And I thought, geez, he's probably in the top 1% of kids doing this, because obviously I'm in AI. What happens when 100% of kids are using it? We just don't have enough tokens to go around. And I guess, to me, it's really inspiring to see
kids using it. You can tell that by the time he goes to university and gets a job, he'll just be so AI-native. He'll understand the world in a way that most of us are still grappling with, whereas he doesn't carry any of that.
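The commentary app Andrew's son built, a line of game text sent to ElevenLabs and returned as audio, might look roughly like the sketch below. The endpoint path and header follow ElevenLabs' public text-to-speech API, but the API key and voice id are placeholders, and the request is only built here, not sent:

```python
import json
import urllib.request

# Rough sketch (not the actual app): build a text-to-speech request for one
# line of match commentary. API key and voice id are placeholders; check
# ElevenLabs' current API docs for exact field names before relying on this.

API_KEY = "YOUR_API_KEY"      # placeholder
VOICE_ID = "YOUR_VOICE_ID"    # placeholder

def build_tts_request(text: str) -> urllib.request.Request:
    """Build (but do not send) a TTS request for one commentary line."""
    url = f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}"
    payload = json.dumps({"text": text}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=payload,
        headers={"xi-api-key": API_KEY, "Content-Type": "application/json"},
        method="POST",
    )

req = build_tts_request("The rain is thundering down!")
print(req.full_url)
# Actually sending it (urllib.request.urlopen(req).read()) would return
# audio bytes to play back in the game.
```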
Greg Lambert (38:07)
Andrew, you talked a lot about, and I'll frame it as the Wayne Gretzky quote of skating to where the puck's going to be rather than where it is. When you talk about…
Andrew Thompson (38:15)
Yes.
Greg Lambert (38:20)
As you're building, you're looking at where the foundational models and the other advancements are going to be in six months, rather than where they are right now. And that takes a lot of knowing the industry and knowing what's coming. So I wanted to ask: what do you do personally to keep up with things? Are there one or two must-read or must-listen resources
that help you predict where things are going to be in six months?
Andrew Thompson (38:52)
Yeah, absolutely. It's like, I feel like I have a fire hose connected to my brain, and I have to filter what's important and what's not. That's obviously important to my job, but it does feel like…
Greg Lambert (39:04)
But just
get the AI to summarize everything and then inject it straight into your brain. There we go.
Andrew Thompson (39:10)
You know what, there's actually some truth to that, to be fair. My latest thing that I love: getting up on Saturday morning, grabbing a coffee, and listening to Harry Stebbings of 20VC here in the UK. He's very globally focused, and obviously a huge amount of the development in AI is happening globally. He interviews lots of one-off guests, but there's a recurring podcast he does with Rory O'Driscoll, who's a VC, and Jason Lemkin, who's an ex-founder and VC.
And I just love the three of them coming from very different vantage points and having that open, healthy debate. One person says this is brilliant, or this is the end of the world, and they're constantly chiming in, but it's less political. They actually have a rule of, we don't talk politics, let's just talk tech and what this means for the average developer, the average investor. I find that both entertaining and a really good weekly insight into what's happening. And they discuss a lot of,
you know, even the topic that's the theme of this chat. I think the other piece is X, which is a fantastic place where a lot of people right at the forefront of AI are putting out opinionated takes, people are commenting, and there's a lot of back and forth. The first bit of the fire hose seems to be on X. And, at least for this audience, Aaron Levie, the CEO of Box:
he takes incredible insights and distills them into easy-to-read paragraphs that really tell you what's happening in the world, how software engineering is changing, what CIOs are looking at. And again, for this audience, if you want to see where the bleeding edge is, look at software engineering. Not just because software engineers like to adopt things, but because of the training data: there's more code in the world that has been ingested, and you can see how far ahead that field is. And then the last person who really fits into this:
Boris Cherny, who created Claude Code, recently got onto X. And if there was ever somebody whose head you'd want to get inside, to understand what he's built, what's happening, and where the world is going next, he's a fantastic resource who posts on X on a daily or weekly basis. I read everything he puts out to get a bit of a sniff test as to where the world is going.
Greg Lambert (41:22)
Thanks.
Marlene, you’re muted.
Marlene Gebauer (41:24)
Sorry about that. We have come to the time in the podcast where we do the crystal ball question, Andrew. So, looking ahead three to five years, when every real estate lawyer has a bunch of proactive AI agents at their disposal, what's the single biggest shift you see coming for the traditional billable hour and the way property professionals actually derive value from their labor?
Andrew Thompson (41:51)
Hmm. Another one of these questions. This is the question a lot of people are asking. I was at Legal Week a month or so ago, and this was one of the big questions, around the billable hour and so on. But maybe to start with the time frame: I think three to five years is, like, it's coming sooner than that. This idea of humans being able to
orchestrate a huge number of AI agents to do far more work than was previously possible: I'm already seeing it with engineers on my team, back to this point of look at what software engineers are doing, and it's probably coming for lots of other industries. The way I think about this: there's raw cognitive ability, intelligence, where AI is more intelligent than us in certain things and less in others, but averaged out;
there's knowledge of the market, which is particularly interesting in legal, depending on where the data comes from; and then there's just the speed at which you can perform work, whether that's writing code or reviewing a 200-page lease. All of those things are getting democratized at an incredible clip. METR, M-E-T-R, is a research group that tracks the exponential increase in how much work
an AI system can do that would take a human N hours. When it started out it was minutes, then half an hour, and it's now, I believe, somewhere between more than half a day and more than a day's worth of work, and Opus 4.7 has just come out. So if we look at a world where all those things get democratized: lawyers have always competed with other clever, knowledgeable,
and fast humans or teams to give their clients the best possible service. Now we've layered in this new thing that takes some of what humans can do and is much better at it, but is also not as good at other things. So I don't have one silver-bullet answer here, but from talking to our customers and being at Legal Week, a few things keep coming up. Human empathy comes up more and more: AI can mimic empathy,
but it doesn't actually have true empathy. Being accountable, again, is really important in a regulated domain where insurance is involved; as of yet, most AI systems can't be regulated, and they're not insured. Take planes, for example. I originally wanted to be a pilot. A Boeing 747 can fly itself if you program it, but even knowing that logically, I don't think I want to put myself and my family on one of those planes
without a human pilot in the seat. Maybe that will change in the future, but I think having humans be accountable is important. And then the third one is this idea of instructing or orchestrating AI. Previously, if I was a manager, I orchestrated humans, and my value, what I was adding, was how effectively I orchestrated those humans. Now I have an extra tool in my toolbox: I'm orchestrating humans, but I'm also orchestrating tokens.
And there's a really interesting interplay between the two. So again, I don't know what's going to happen with the billable hour. There was so much debate at Legal Week: why did it exist to begin with? Clients wanted it; that's why it was brought in. Now clients are asking for it to be taken away. And it has been incredibly persistent through year after year of people calling for its demise. There is an inherent conflict with AI, because AI makes the work more efficient. But I think
humans will be valuable, and we're all figuring out in what way. I've given a couple of ideas as to where that might be, but this is an ongoing debate we're all having, and I think we'll figure it out. It's a work in progress as we go.
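The METR trend Andrew refers to, the length of task an AI system can complete doubling at a roughly fixed interval, can be illustrated with a simple extrapolation. The starting horizon and doubling time below are made-up round numbers for illustration, not METR's measured values:

```python
# Back-of-envelope sketch of an exponentially growing "task horizon":
# the length of task (in minutes) an AI system can complete, assuming it
# doubles every `doubling_months` months. All numbers are illustrative.

def horizon_after(months: float, start_minutes: float = 30.0,
                  doubling_months: float = 7.0) -> float:
    """Task horizon in minutes after `months` of exponential doubling."""
    return start_minutes * 2 ** (months / doubling_months)

# Two doubling periods (14 months) take a 30-minute horizon to 2 hours;
# six periods (42 months) take it to 1,920 minutes, a multi-day task.
print(horizon_after(14.0))  # → 120.0
print(horizon_after(42.0))  # → 1920.0
```

The point of the exponential form is that the same percentage gain compounds: each fixed interval multiplies the horizon rather than adding to it, which is why "minutes" becomes "days" within a few years.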
Greg Lambert (45:38)
All right, well, Andrew Thompson, CTO at Orbital, thank you very much for coming on, and I really appreciate you taking the side trip to talk VLMs and LLMs with me.
Andrew Thompson (45:49)
It’s a pleasure.
Marlene Gebauer (45:51)
Yeah, thank you, Andrew. And thanks to you, our listeners, for listening to the Geek in Review podcast. If you enjoyed the show, please share it with a colleague. We’d love to hear from you on LinkedIn and Substack.
Greg Lambert (46:02)
And Andrew, for our listeners who want to follow your engineering mantras or learn more about Orbital, where's the best place for them to connect with you?
Andrew Thompson (46:12)
Yeah, great. Head over to orbital.tech. That's our website, and the tech blog is linked from there. We've got lots of content on the mantras and everything else, some of the interesting things we've been working on over the years, and some really interesting things that will probably drop fairly soon.
Marlene Gebauer (46:28)
And as always, the music you hear is from Jerry David DeCicca. Thank you, Jerry, and goodbye, everybody.
