This week we sit down with Sean West—co-founder of Hence Technologies and author of Unruly: Fighting Back When Politics and Law Upend the Rules of Business. Together, we explore the shifting fault lines where law, technology, and geopolitics collide. From the growing reliance on generative AI in legal work to the erosion of the rule of law and the emerging threats (and opportunities) facing knowledge workers, Sean offers a strikingly global—and at times unsettling—view of the legal profession’s next frontier.

The conversation kicks off with a discussion on the Law360 survey showing that 62% of lawyers are using ChatGPT in some aspect of their work. Sean explains the popularity of general-purpose AI tools over legal-specific ones as a matter of price, accessibility, and perceived innovation. While lawyers trust themselves to edit AI outputs, Sean warns that this passive use of AI could slowly and invisibly displace traditional legal roles, without firms consciously realizing what’s been lost.

The discussion deepens as Sean introduces the idea of passive job displacement—where tasks once assigned to junior lawyers, interns, or external vendors are quietly absorbed by AI tools. He likens it to carrying “a quarter of a human brain in your pocket” for $20 a month. What starts as convenience becomes infrastructure, and over time, demand for human input declines. He also questions the long-term viability of legal tech products that can’t clearly outperform generalist AIs like ChatGPT or Claude.

Sean then draws on his geopolitical expertise to underscore the urgent need for situational awareness in law firms and businesses alike. He explains how political volatility—from China and Taiwan to Europe’s regulatory tactics—can suddenly reshape the legal landscape. Rather than relying on traditional prediction models or complex advisory plans that get shelved, Sean emphasizes proactive legal scenario planning. His new product, Hence Global, offers a “geo-legal” lens on global news, customized for specific legal practice areas to help firms act instead of react.

We push further into the implications of “front-stabbing” politics, where once-hidden power plays are now openly transactional. Sean describes a world where AI-driven lobbying, mass arbitration spam, and “robot lawyers” can reshape public policy or flood companies with legal claims at scale. He argues that when the rules are ambiguous, large players will push boundaries—and smaller players may get squeezed out. In a world without a clear referee, the game favors those who can afford better tools and faster moves.

Finally, Sean challenges legal and corporate leaders to stop avoiding the hard conversations. Whether embracing AI to boost productivity or choosing to protect jobs, organizations must be transparent. “Let’s front-stab about it,” he says. Make your commitments public—whether you’re retraining your workforce or doubling down on AI-driven efficiency. Because in a world where legal, political, and technological lines blur, silence isn’t just unhelpful—it’s a risk.

Links and Mentions:


Listen on mobile platforms: Apple Podcasts | Spotify | YouTube

Bluesky: @geeklawblog.com @marlgeb
Email: geekinreviewpodcast@gmail.com
Music: Jerry David DeCicca

Transcript

Marlene Gebauer 0:07
Welcome to The Geek in Review, the podcast focused on innovative and creative ideas in the legal industry. I’m Marlene Gebauer.

Greg Lambert 0:14
And I’m Greg Lambert.

Marlene Gebauer 0:15
And today we welcome Sean West, co-founder of Hence Technologies and author of a new book, Unruly: Fighting Back When Politics and Law Upend the Rules of Business. Sean, welcome to The Geek in Review. Very excited to have you here.

Sean West 0:28
Well, I’m super excited. It’s such a pleasure to be here. It’s been on the bucket list. Yeah.

Greg Lambert 0:32
And for those of you on YouTube, you can see Marlene and I are actually in the same room together, sharing the same microphone. So get ready: two people enter the octagon, but only one will leave. All right. So Sean, I wanted to talk with you. One of the things that crossed my desk this morning was this survey that Law360 did. I think the sample was 400 attorneys from large U.S. law firms, and 62% of them say that they use ChatGPT for, you know, basically their legal work. And then it kind of drops off.

Marlene Gebauer 1:20
What does that mean, their legal work? Like, what constitutes legal work?

Greg Lambert 1:25
Yeah, and just to give a little bit of background on this, I’m thinking it’s kind of the normal day-to-day stuff. So I don’t think it’s their client stuff. I think it’s kind of like, help me phrase this right, or do some research on this. But, I mean, it’s 62% use ChatGPT, and then it drops all the way down to like 28% for Lexis Protégé, and I’m thinking that number may be high.

Marlene Gebauer 1:56
Yeah. Which is, of course, geared directly to legal.

Greg Lambert 1:59
Yeah. And then the next one is Microsoft Copilot, and the next one after that’s Google Gemini. And then 17% use CoCounsel, and 5% use Harvey. So I’m just kind of curious, what’s your take, with your experience, about lawyers using commercial tools that are not legal-focused, like ChatGPT?

Sean West 2:22
Well, for me, I think there are a couple of pretty straightforward explanations for it, right? One is, everyone’s just decided that they’re going to buy their own subscriptions to these AI platforms and use them, and not go through central procurement, not go through central IT. They’re going to use it on an iPad sitting next to their laptop. I mean, you see research about that. So I think, number one, you’re not disintermediated by the system saying yes or no to using these tools. I think second, the price point is the reason, right? When it’s $20 a month, it’s like a couple of pumpkin spice lattes or whatever you drink. It costs nothing to have it if it’s useful. Once you don’t think that hard about the subscription, you also get to feel like you’re on the cutting edge, because they’re constantly pushing out new features and models. So I think a lot of people have taken the jump on it while their organizations are sitting back, saying, we need to assess all these other tools and put them through the wringer. And by the way, those tools are actually kind of expensive, so are we going to get the ROI and whatnot? Everyone’s just said, look, I can have, you know, a quarter of a human brain in my pocket, or whatever it is. It’s not full AGI yet, but I’ve got some amount of human capability in my pocket. Costs me nothing. I’m going to use that. Now, I think lawyers are smart enough to understand the limitations of it, but their use case also means they’re probably willing to accept some amount of risk, because they trust themselves to catch the errors. We know AI is smart enough to pass the bar but dumb enough to invent cases, and that’s kind of a problematic dynamic. If you had an intern like that, they’re probably not getting a full-time job.
I think lawyers who view themselves as editors, right, are willing to put things in and see what comes out, and then decide what to keep and what not to keep, and move on with it. There may also be some polling bias here, too, where they don’t want to name a particular tech product that they like, so everyone just says ChatGPT. They may be overstating how much they’re actually using it for legal work, because they’re probably not using it as much as this says.

Greg Lambert 4:23
Yeah. And the other thing is, I had people who immediately responded, well, why are they using ChatGPT? Claude is so much better. And I’m like, yes, but to most people, ChatGPT is the catch-all.

Marlene Gebauer 4:34
See, that’s what I was wondering. I don’t know what the pool was here, but there’s something to be said for branding. Everybody knows ChatGPT. They’ve heard it in the news multiple times, maybe not so much some of the other products. And it’s interesting with Microsoft Copilot, ChatGPT, and Harvey, because I know Harvey has great penetration in large law, but that’s really narrow. Whereas with Microsoft Copilot, I don’t know that many firms are using it yet. Maybe they are. But the number of people using that compared to ChatGPT is interesting, because you would figure they’d have access.

Greg Lambert 5:23
Or, I wonder if that’s a catch-all there, too.

Marlene Gebauer 5:27
That’s what I’m wondering, if that’s what they mean. Yeah.

Sean West 5:30
If they’re a Teams organization, a Microsoft organization, and it’s been enabled, then, of course, they’re using it. It’s been handed to them, right? It’s

Greg Lambert 5:38
an interesting one. I think this kind of reminds me of that argument where, as law librarians, there was a period of time where we’re like, oh, please, please don’t Google this, we pay millions of dollars for sophisticated legal research tools. And quite frankly, there are times where it’s fine, where it’s much better to use Google than it is to use Westlaw or Lexis. It just depends on what the issue is.

Sean West 6:08
We need to start thinking about the passive replacement of jobs through the use of these tools. People don’t understand how reliant they are on these tools, and they don’t understand the work they’re not doing because they’re using these tools. So I give a keynote now, and it has, like, one bold sentence and an AI-generated image next to the bold sentence. And I flip through, like, 50, 60, 70 slides, and I always pause on a slide about tech and its implications for the economy. Because if you stop and reflect, that’s a whole bunch of software licenses I didn’t buy, right? I paid for one thing. I paid for ChatGPT. That’s a designer I didn’t pay to design my slides. It’s an intern who’s not running between me and the designer trying to figure out what I’m trying to say. That’s literally me and a large language model eating jobs for some fraction of $20, because it didn’t take me a whole month to do it. And we can get really deep in the conversation about whether AI will eventually replace high-level legal work or typical legal work, and many people have discussed that. But I think it’s this passive willingness to take that marginal query to an LLM instead of to a human, such that eventually we wake up and there’s less work for humans to do, without us realizing it. It’s not necessarily a decision to offload it to AI. It’s the fact that it’s available and you use it that way. You know, are you going to ChatGPT for recipes instead of buying a recipe book today? Eventually, we’re probably going to realize that people are doing that. It was never an active decision by society that ChatGPT deserves three Michelin stars. It’s just behavior. It’s people trying it, and before they know it, they don’t need the other product

Greg Lambert 7:42
anymore. And the flip side to that is, instead of doing a presentation where you have 50 or 100 slides, you do a presentation where you do four slides. And if you’re like me, you weren’t going to go pay for somebody anyway; you were just going to do less. So this just enables me to take the same amount of time and create more content in that time, because I’m too cheap. I wasn’t going to pay somebody to do that.

Sean West 8:13
Yeah, look, I think it’s all different use cases, but it is pretty stunning just how much work these LLMs can do. And I say this as a legal technology entrepreneur who builds apps and is regularly asked, what does this do that ChatGPT can’t do, right? That’s just a common knee-jerk reaction from anyone asking, why do we need this technology? The reality is, these LLMs can do a ton. So when you’re starting to see data that people are relying on them to do a ton, it also can make you question valuations for some of these tools that are competing with what a generalist AI can do. As fun as it is for lawyers to direct ChatGPT to do legal work, I don’t think that’s why Sam Altman has built ChatGPT. I don’t think that’s a case that OpenAI is particularly interested in owning, right? Do they want to own the legal space? I do think for companies like Microsoft that have owned the enterprise, legal is a very easy adjacency for them to try and dominate. And I also think that when big companies like that get interested in it, and when lawyers start using it, then you get constituencies that want to change the regulations that prevent AI from actually practicing law. And that’s where, for me, a big crux of the book, which I’m sure we’ll get into, is about this intersection between politics, technology, and law. When a lot of people use a certain technology, it creates political momentum to allow them to use it for more things, and I think it’s eventually going to lead us down the path of AI practicing law, and the rules will enable that, because the constituencies that want it will be different than in the past. And I think that’s a different world we’re headed into for legal work.

Marlene Gebauer 9:55
Sean, we’re definitely going to get into the book and talk about this intersection you’re describing. But first, tell us a little bit about you and how you got into this area.

Sean West 10:06
Yeah, for sure. So I’m co-founder of Hence Technologies. It’s a software company based in the US and the UK. We have people in the Netherlands, but our engineers are primarily based in Rwanda, and they build software for legal tech and for enterprises. We have a new product, which we’ll talk about, that does effectively geopolitical and geo-legal analysis, which I’m pretty excited about. I spent most of my career as a political forecaster. I was Deputy CEO of a company called Eurasia Group, a prominent geopolitical forecasting company. And when I left, I left because I felt like the advisory world was going to get disrupted by technology, and I wanted to be a disruptor. So I set up Hence with my co-founders, and we started off working on outside counsel management, because we felt like the purchasing of advice generally, not just in legal, was a pretty interesting space. And legal raised their hand and said, we have this problem: in-house, we spend a lot of money on legal advice, and we don’t know if it’s good or bad. Can you solve that? So we started working on it, and I suddenly found myself at a different legal conference every month. And before the generative AI zeitgeist at the end of 2022, these conferences talked a lot about digital transformation and productivity enhancement, but suddenly everybody jumped to, look at what AI can do, let’s use it. Today you can’t go to a legal conference where less than 60% of the agenda is about AI; it’s probably more like 90% of the agenda. And I sort of stepped back and thought, with my political hat on, this is lawyers talking to lawyers about lawyering with AI, but nobody’s actually answering the question about, number one, what’s going to be allowed from a regulation point of view. Not regulation of the legal profession, but regulation of AI.
Number two, it’s great to make ourselves really productive, but what if we’re super productive as the world melts down, right? What if we’re in the Presidential Suite on the Titanic? That’s kind of problematic, and nobody’s giving speeches about where the world is going. And the world is being impacted not just by individual politicians, which we can discuss if you want, but really by a decline in rule of law globally. Two-thirds of the world has never really had access to justice. So to say that there’s a rule-of-law recession, I think, glosses over the fact that in most of the world there’s not necessarily rule of law, and most people don’t have access to their justice system. But in the Western world, where we think we do, we are seeing very marked reductions in rule of law and access to justice. And that got me thinking a lot that not only is the legal community missing out on what’s happening politically, but the political community also rarely thinks about the role that law and lawyers play in that process. They tend to think lawmakers are called lawmakers, so they make the law, and that’s kind of the end of it. In reality, we live in a world where the law is being weaponized. When the political process fails, the only route you can go is to challenge things through the courts. So for me, the origin of all this was the sense that there was a constituency in the legal community for understanding global events, geopolitical events, et cetera. And I wanted to test it out, so I set up a Substack, geolegal.substack.com, and immediately got all this feedback. The subtitle was “helping lawyers look around the corner before it’s too late,” and a bunch of lawyers opted in very quickly. They were like, yeah, I’d be happy to look around the corner.
I suddenly said, okay, we’re on to something here, right? The legal community is not getting this from other sources, because lawyers don’t want to comment on politics. They’ll tell you about a rule change, but they don’t really want to talk about it. So it becomes a little edgy to actually have a view on where politics is going, and there was a space for that. That’s led us down the path where we’ve actually built product against that because of the interest. It’s not like we did content marketing because we had a product that did this. It’s that all these lawyers were saying, man, I can’t do horizon scanning, I can’t keep track of what’s going on in the world, my Google Alerts are so much that I ignore all of them. And we thought, okay, there’s actually space to solve this problem for the legal community too.

Greg Lambert 14:09
Yeah, I have a hard time looking at my Google Alerts and Google News too. So, I mean, a few weeks ago we had the Vice President going over to Europe and talking at an AI safety conference about basically taking all the guardrails off, holding back on regulations, and letting the companies kind of shoot forward. On top of that, there’s all the stuff you talk about on your Substack: the Ukraine issue, the Middle East issue, the NATO pact, the tariffs that are going on, what’s going to happen with supply chains, all of this stuff. So how do you even direct strategies for these tech companies to look at how they need to operate, and what kind of resilience and risk mitigation and business continuity they need? How do you even try to throw a lasso around that?

Sean West 15:11
Yeah, so there are a couple of ways to do this, right? If you’re doing this with humans, typically you’d start by trying to figure out what your main fail points are and how they’re politically exposed, and you build scenarios around that. The problem is that that ends up being a really nice exercise that ends up in a folder that goes on someone’s shelf, and then everybody goes back to their real work, right? Because it’s very hard to monitor. Or you can bring in a whole bunch of corporate consultants who say geopolitical risk is this new thing, you should build resilience. And I wrote a piece a couple of weeks ago saying there’s no resilience button. It’s not like the Easy Button at Staples; you can’t just hit “build resilience.” Building resilience is a whole lot of different things, because the world is increasingly complex today. So the way I think about it is starting to ask the legal questions early around political scenarios, particularly from a geo-legal perspective, the intersection of geopolitics and legal. I think that gets you a lot of leverage. Everyone’s worried about China invading Taiwan or blockading Taiwan, but they could just wake up tomorrow and say, you will not be able to do business in China if you don’t honor the fact that Chinese law applies in Taiwan with your business in Taiwan. If they said that, right, lawyers would scramble to figure out all the implications. Can I employ staff in Taiwan and in China? Do I have to pay them the same? You can do a lot of that work in advance, right? These are the same questions whether they’re asked before the event or after the event. So I think a big lesson from Russia-Ukraine for a lot of companies was, we saw Putin amassing troops on the border, and we tended to take the position that he won’t go for it.
When he did, we had to scramble and fire-sale everything and make sure we were compliant with sanctions. But we saw it coming, right? So a big part of this for me is giving people situational awareness about what is in front of them and what is happening. There’s this holy grail of prediction. I was a forecaster; I would go on TV and say who’s going to win the election, what policy is going to pass, et cetera. It’s not that you can’t do that anymore, but the world’s almost too complex to do that. There are too many moving parts to get it right. And as one AI pioneer told me, relying on AI for that is a bit challenging; we can’t even predict whether you want to watch a tenth cat video after you’ve watched nine of them. So predicting the entire world is a lot for AI. The fact is, you can start to understand what’s going on and react quickly when you see it in front of you. So we can see, for instance, the fracturing of the transatlantic relationship between the US and the EU when it comes to the military, and that relationship leads to a fracture in foreign policy. If the US is suddenly going to say to Europe, hey, you’re kind of on your own, good luck, we’re not going to defend you, well then, do we think that Europe is going to be nicer or meaner to US companies? They’re probably going to be meaner to US companies. And if they’re going to be meaner to US companies, then, you know, if it were a video game and Europe were a character in the video game, what’s their secret weapon when you enter the cheat code? Their secret weapon is massive regulation. They break out the massive regulation and direct it at the US. You can foresee a lot of this stuff. So for me, a big part of this is about getting people out of the mindset that we can just take a snapshot of what’s going on in the world.
You can’t just look at the world and say, okay, I did my political analysis of entering this country, job done, right? Great. No, you have to refresh this every month, every two months, every six months. The world is too dynamic. And I think the US is a chaos agent in this world, but there are other chaos agents too, and the way those things interact means you can’t really anticipate it. You kind of have to be ready to react. And we can talk more about some strategies as well.

Greg Lambert 18:45
Let me just expand on that a little bit. If I try to explain what the rule of law means to my kids, I say, you know that a place adheres to the rule of law if, at 2:30 in the morning, you watch a car pull up to a stoplight, stop, and wait, even though there’s nobody else around. They wait until the light turns green and then go forward, even though they know there’s not anyone watching them. And it feels like we’re losing that right now. It’s the, you know, how do you act when nobody’s watching? You have that comfort of thinking that everyone around you is going to follow the rules, and it feels like that’s somewhat disintegrating, right?

Marlene Gebauer 19:45
Yeah, people try and get away with what they can, you know.

Sean West 19:47
Well, there’s a political answer to that and a technological answer to that, right? As I sort of outline, I think these three factors, politics, technology, and law, all exist on the same plane, and they’re pulling in different directions. Politically, you can blame a lot of this undermining of rule of law on the Trump administration, in terms of what’s happened in the last 12 weeks. But this is a trend that I’ve been writing about, that we’ve been seeing more broadly in the US. I mean, yes, if people commit felonies and then are just mass-pardoned, with no regard for whether they caused human injury, that’s probably not good for a belief that the legal system delivers fair outcomes, because they’re being pardoned for political reasons. Or if the mayor of New York City commits corruption and then decides that he’s suddenly happy to get behind the President’s immigration policy, and he no longer has to worry about his court cases, that stuff’s probably not good for rule of law. We can all see that, right? But the weaponization of the legal system to go after Trump was, I think, fairly mild relative to the way Trump just went after Covington & Burling for defending the January 6 prosecutor, right? Trump’s doing this on steroids. But the legal system was what was being used to try and slow Trump’s return to office, and politically we have to acknowledge that that’s not a good thing for the legal system. It was a tool. Maybe a good thing politically, if you’re a Democrat, but not a good thing for the legal system. So I think there’s that political set of answers, but then there’s a technology set of answers, which is that technology companies ignore the law.
They do what I call the California stop, or a California roll, for people who like sushi, right? You talked about pulling up to the stoplight and stopping. In California, you pull up to the stop sign, you look around, and if there’s nobody there and it’s safe, you just keep going. You don’t actually do a full stop. And tech companies have been doing that since inception. The problem for everybody competing with tech companies is, if you’re racing them and you’re stopping at the stop sign, you’re not going to win that race. As a company, you’ll die. So I think we’re in a world where people are willing to pay the fine if they get in trouble. I mean, Uber ran a phenomenal strategy doing this against the taxi industry.

Marlene Gebauer 22:04
Right, because, first of all, it’s unlikely you get caught, and if you do get caught, it’s not that big a deal financially.

Greg Lambert 22:14
I’ve got a NASCAR saying: if you ain’t cheating, you ain’t racing.

Sean West 22:19
Well, and I think AI and tech end up enabling this a lot more, because the rules are gray, right? You’re not necessarily breaking the law.

Marlene Gebauer 22:30
Exactly. It’s very, yeah, you know, unclear.

Sean West 22:35
It’s completely unclear. So if I say tomorrow, I’ve built a robot lawyer, right? We saw DoNotPay get in trouble for saying they built a robot lawyer. But it’s not clear that it was illegal to build a robot lawyer, right? There are people who will come after you and threaten you legally if you do it, but it’s not clear that it’s an illegal thing to do. And I think it’s the same thing with a lot of the AI regulations. If you look at Europe’s AI Act, where you can only use AI for these particular purposes, it’s like, okay, cool, well, it’s going to take you five years to implement that. So how much money can I make in five years by bypassing that? And then when you find me, I’ll pay you, or I will have gone bankrupt. Now, I’m not advocating law-breaking, right? But I think that mindset undermines rule of law too. And frankly, I write about this in the book, we’re experiencing this on a local level as well. I don’t know how it is in Houston, but in Los Angeles or New York, if I want to go get toothpaste at CVS or Target, it’s in a glass box. I have to ring a bell, and somebody has to unlock my toothpaste for me, as if I’m buying jewelry, right? That’s because people are shoplifting and the law is not being enforced. So you have another example of laws not being enforced, for a variety of reasons, some good, some bad, when it comes to shoplifting. But it further reinforces the sense that the law is not to be respected, and that the company can’t assume the law will be enforced. They have to build a glass case, and they lose sales.
I mean, I have friends who tell me, I walk into Target, and if I ring the bell and they don’t come fast enough, I order it on Amazon and just get it tomorrow, because I don’t want to wait there for 12 minutes, right? They’re losing sales because the government’s not enforcing shoplifting law, as an example. So we’re experiencing this from a variety of different directions, but that erosion in rule of law is pretty significant, and it’s rising to the constitutional level here, where the President is issuing executive orders and Congress is basically saying, we’ll look the other way, do what you want. Maybe they’re okay with it, maybe they’re not, but Congress is not that involved. So the only vehicle is to go to court, and suddenly you have judicial lawmaking, because Trump’s issued 1000 executive orders and courts have to piece through all of them. That’s not really the way the political system is supposed to work.

Marlene Gebauer 24:54
And the courts are not prepared. I mean, they’ve been overwhelmed with litigation for years, and now this is on top of that, so they’re not able to handle it in a timely fashion. Exactly. Yeah. Well, speaking of that, and sort of not playing by the rules: you’ve written that at the intersection of legal and tech, some things are going to begin to manifest themselves, maybe as early as this year. Large volumes of automated lawsuits, deepfakes, and, I’ll say this is kind of against trend, because most people are not saying this, the loss of jobs for knowledge workers. I know, I know. So can you elaborate on how and why that’s going to happen? And are there any opportunities? Is there a silver lining that comes out of this?

Sean West 25:46
Sure. Well, I'm happy to talk about the loss of jobs for knowledge workers and lawyers, because we're on a podcast and I don't have to worry about getting to the exit of the conference room. But let's start with this broader idea about the weaponization of AI in the legal system. I had a piece in the Harvard Business Review with my co-founder, Steve Heitkamp, talking about how legal risk is going to look more like cyber risk in the next few years because of automated, spam-like lawsuits that can be filed. And I had my own experience where AI enabled me to challenge something I would not have otherwise challenged. I've had a couple of experiences. One, I got in a car accident. The emergency room doctor diagnosed me with a concussion. My doctor diagnosed me again with a concussion. The person at the scene who hit the car said that they took responsibility. So I mailed this stuff to the insurance company, like, hey, I got a concussion, and they write me back a letter saying, we determined you didn't have a concussion. It's like, well, thank you, claims handler, who's not a doctor. But I know I did, and my doctor knows I did. I don't really want to sue you, because I want to work with insurance companies, I build software, but this doesn't sit right. So I wondered if I could use AI to do this. And I found an AI personal injury lawyer, Zaf Legal, regulated out of Utah, and I went to them, and I punched in the details of the case, and I generated a demand letter, in a form the insurance company has to respond to, with all the facts in my case, and I sent it to them, and suddenly they negotiated with me. It's a very specific, unique example, but when it was as easy as me pushing a button to respond, instead of being helpless, I responded. That's the big takeaway. It's less that they then engaged; it's that I probably couldn't have been bothered to really go deep on this.
Nor do I want to invest resources or personal reputation in doing it. When I just have to push a button, when I can carry my favorite legal attacks on my phone and I just push a button: flight's delayed, push a button, I'm making a claim, right? I'm in a coffee shop and I feel like I was discriminated against, push a button, make a claim. When you can start to do that, not only will you do that, but your competitors will do that to you. And I think that's where this starts to get really interesting. It's not hard to imagine scenarios where, if it cost me nothing to back everybody who exited my competitor in filing lawsuits against them, I might do that. Not me personally, but, you know, if I were a big, big corporation that wanted to, I could do that. We see this with mass arbitrations. We saw DoorDash and Samsung and Amazon all get hit with tens of thousands of arbitrations in one day. The whole point of those arbitration clauses is to make sure that doesn't happen to you as a company, and suddenly it did, and some of that was tech driven. There are companies that use technology to do that. So I think we're going to be in a world where companies are going to start to see a flood of claims, mostly frivolous, like spam, but some that are real. And you're going to need, effectively, a spam guard that catches the good stuff, so you know what's good. So I think those tools, litigation management, that type of stuff, are going to be really important for companies to implement. That creates a world of legal work being done by AI, right? And so you raise this question about whether I'm optimistic on some level about the future for knowledge workers. I think knowledge workers have a broad future. They're just going to have to adapt.
And the problem is, knowledge workers aren't used to being forced by the market to adapt in the same way. I saw a really nasty meme as federal government employees were being fired that said, hey, federal government employees, you told all the white collar workers they should just retrain to be coders, now it's your turn, right? And on some level, all of us in the professions have sort of said, okay, people are doing stuff that can be automated, they should upskill, right? Well, now that's coming for you, and I think that's pretty challenging for law firms. To me, that doesn't mean you necessarily have less billing, or you're a less successful business, or even necessarily fewer lawyers, but they are going to have to replace what can be automated with other stuff they can do. I think strategic advice becomes really attractive for law firms, because they already have all the confidentiality and privilege, and they know everything about all the businesses. I would much rather get political advice from someone covered by privilege than political advice from someone who isn't, and a lot of companies do run their political stuff through the legal department for that reason. Law firms can begin to amplify their capabilities in that regard. You know, the law firm equity model doesn't necessarily reward the political consultant who joins and can't be a partner, because they can't own part of the firm if they're not a lawyer. But I think the firms as a whole have a future. But look, my view on the Jevons paradox argument, which everyone just kind of flatly asserts, is, oh, well, when it becomes easier to do legal work, there'll be more legal work, and therefore there'll be more work for lawyers, net-net.
That may be true, but the distributional consequences may be that, for the next five years, a whole bunch of people don't have a job, and that becomes really problematic, right? So if I tell you that in 10 years there'll be 3 million lawyers in America instead of whatever it is now, 1.5 million, or 1.3, that might be true, but getting from here to there is going to have distributional consequences that are really painful.
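The "spam guard that catches the good stuff" Sean describes can be made concrete with a toy triage heuristic. Everything below is a hypothetical illustration of the idea, the field names, scoring rules, and threshold are invented for this sketch and do not describe any real litigation-management product:

```python
from dataclasses import dataclass

@dataclass
class Claim:
    claimant: str
    text: str
    has_supporting_docs: bool = False   # e.g. medical records attached
    duplicate_of_recent: bool = False   # near-identical to a recent filing

def triage(claim: Claim) -> str:
    """Return 'review' for likely-genuine claims, 'deprioritize' otherwise."""
    score = 0
    if any(ch.isdigit() for ch in claim.text):
        score += 1   # specific dates and dollar amounts suggest substance
    if claim.has_supporting_docs:
        score += 2   # corroborating evidence is the strongest signal here
    if claim.duplicate_of_recent:
        score -= 2   # mass-generated filings tend to repeat boilerplate
    if len(claim.text.split()) > 50:
        score += 1   # very short demand letters are often templated spam
    return "review" if score >= 2 else "deprioritize"
```

In the world Sean sketches, something like this would sit in front of a company's inbound demand letters the way a spam filter sits in front of email, routing high-scoring claims to a human lawyer and batching the rest.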

Greg Lambert 30:58
Yeah. Well, don't underestimate the bar associations throwing a monkey wrench into progress, because I'm thinking, is there going to be some type of anti-SLAPP-style regulation on these AI-generated lawsuits? Then again, we can't get out of our own way sometimes, so I'm not sure that's going to help the situation, but I can see the rules and bar associations fighting.

Marlene Gebauer 31:32
They will fight. They always do whenever they're at risk.

Sean West 31:35
So what happens when it's one registered lawyer in Arizona using all the AI to make themselves the most superhuman lawyer you've ever seen, filing all of these cases, and they're technically complying with the rules, and they've figured out how to effectively create a troll farm that enables them? So I think it's this thing where the law will still have the veneer of being human. Even if it's automated, it's just a force multiplier. But it's gonna be very hard to figure out what's been generated by AI and what hasn't. Today you can do that by reading the language and understanding that AI talks in, like, very stilted tones. But that'll get better, and so it'll be hard to block it, because you won't know. It's like, could I block every inbound sales email I get offering me development talent? My spam guard is pretty good because they usually use the same language, but some stuff gets through because people know.

Greg Lambert 32:30
I noticed there was a feature on one of the AI tools that I'm using that says, do you want to increase the amount of anti-AI-identification language, right?

Marlene Gebauer 32:46
And even with a prompt you could do that, you know, make it more casual, make it…

Greg Lambert 32:50
Make it sound like it's not, yeah, not sound like you. So, Sean, I want to talk to you. You've got this phrase that you use called front stabbing, and it kind of reminded me: when I was a kid, I moved from Illinois to Mississippi, and one of the things someone mentioned to me was, well, at least here you know where everyone stands. They're not going to stab you in the back. They're just going to stab you right in the front. And we're seeing more of that, I think, especially with our governments, and I think we're seeing it more in how we're addressing business.

Marlene Gebauer 33:32
The whole meeting with Ukraine, yeah, that's all publicized, as opposed to kind of having that happen behind closed doors.

Greg Lambert 33:39
And, you know, maybe in a way that's a good thing sometimes; most of the time it's a bad thing. But I'm just curious what kind of effect you think this has on the legal tech industry, and even things like lobbying. How does this work?

Sean West 33:57
Yeah, it's a great question. So, you know, I use the phrase front stabbing. I recall a management meeting at a prior job where a colleague just started screaming, where I come from, we stab people in the front. And I always remembered that they said that. And Anthony Scaramucci, who was Trump's press lead for 11 days in the first term, went on Dr. Phil, of all shows, and talked about how he's a front stabber, and how he had gotten stabbed in the front by Trump, basically. And so when I started seeing these lists Trump was putting together, like, you're an undocumented immigrant, register, sign up for my list, it's like, these are lists of people that we can just effectively wipe away from the country. And you've seen it in the brazen politics, and you mentioned some examples of it with allies or former allies or purported allies, right? So to me, there's an interesting question about what this means for technology and for business. I think what it means politically, first, is that yes, everybody may know where they stand, but it also makes politics an actual knife fight, and that's a little different than it's been before. There have always been smoke-filled rooms and back dealings and all that kind of stuff. But when you actually, literally can politically trot out everybody that is against you and knock them down, that's what you see in emerging markets with militaristic leaders. That's not what you see in democracies, where the goal is that everybody is a representative of the population, and they should be bargaining and trying to figure out which direction we're going. Now, that's not been the way American politics has worked for a while, but it's notionally where we should be heading, and going a different direction is pretty damaging, because it invites the next administration to come in and do the exact same thing.
The glass has already been broken; you can do that. When you think about it from... well, I'll actually share an interesting example. This question on transparency is a really important one: is it better just to know? A friend of mine, and I wrote about this on LinkedIn yesterday, was holding a conference in Florida, and the Trump administration got in touch and said, would you all like to meet President Trump? And they said, here's a menu of options: for a million dollars you can have a one-hour meeting with President Trump, for $750,000 you can meet Vice President Vance, and for $500,000 you can meet Elon Musk, right? I'm not joking; a CEO told me they were presented with this list of options, and they declined, not because of the money, but because that's not something they were going to do. But I raised the question: you first hear that, and you're like, wow, that's super slimy and gross, what is that? At least I did. But then you stop and say, is this the way it's always been, and we're just now seeing the transaction? If I donated a million dollars to any presidential candidate, and I called them and said, I want to be ambassador to whatever country, my odds of being ambassador to that country are very high, right? So to me, there's a question of whether this is the way politics has always been and we can now just see it. And does sunlight disinfect, because people are like, wow, this is gross, I don't like this? We'll see. From a business and a legal tech perspective, I think the onset of technology to push your agenda and attack your foes, at the same time that politics gets nastier, makes for a much more pugnacious environment. So there have been a number of different researchers.
John Nay, a fellow at Stanford, and Bruce Schneier, who's a cybersecurity expert, have written about creating robot lobbyists that can actually do the work, and they've done a lot of testing of this. So imagine a world where an AI can detect the one word you need to change in a law to give you maximum benefit, and then automatically send letters to Congress saying, change an "and" to a "but" in this legislation, nobody will notice, and we will back you at your next re-election. That's a really interesting world to live in. You can argue that small companies would potentially have an outsized voice in that type of world, because they can use technology to give themselves a voice. You can also argue that big companies with bigger budgets will have better AI, and you can also argue that if the government is getting a whole bunch of spam, it will stop paying attention. So there are some really interesting reflections there. But when business and politics have the gloves off, it rarely rewards the disadvantaged or the smaller players, so you move towards a world of consolidation in this type of environment. And that's why you saw the tech leaders coming out and saying, sure, we'll give you that million bucks, yeah, we'll be at the inauguration, right? Why wouldn't you, if my job is to deliver value to shareholders and there's a good return on investment? Someone joked in response to my post on this that they can already see the tech startups building the million-dollar line item into their go-to-market for meeting with President Trump. It's not clear to me that many venture capitalists would tell you that's a bad investment. It might be a better investment than a million dollars on a domain name.

Marlene Gebauer 38:50
Well, before we leave, we want to talk about Hence Global, what you call a geo-legal product. It's an AI-powered platform that digests global news for political, legal, and operational impact, specifically related to you and your business and your clients. Tell us a little bit more about how that works and what the benefits are for those who use it.

Sean West 39:18
Yeah, I'm super excited about this product, and we're seeing a huge amount of uptake. We're in soft launch right now; this is probably the first time I'm really amplifying it publicly. And what we're seeing is law firms coming on, using the software to digest what's going on in the world, to understand how their clients are being impacted by politics, and to see it all from their perspective. So the intellectual property lawyer at your firm will see a very different view of what matters, politically and legally, than the employment lawyer.

Marlene Gebauer 39:46
They set that themselves? Is that how it works?

Sean West 39:49
So the system does a lot automatically, but when you put in your job, you have the opportunity to put in a sentence or two about the types of things you care about. And that can change; you can change that. But the system serves up completely different news for different types of lawyers within the organization. So in effect, it's mass customization, right? It's the ability to get something that's just for you without a whole lot of effort. What's most exciting about it is that the world is filled with noise; there are huge amounts of news every day. Those Google Alerts are great, but you can't read them, because most of the time it's stuff you didn't care about. So the system knows you, it surfaces what matters to you, and it'll do an on-the-fly analysis of how that affects you or your business. So if you're a law firm and you see that a political event has occurred, you can generate an analysis of where it presents opportunities for you to reach out to your clients. It'll look at it and say, this is an opportunity for your disputes practice, this is an opportunity for your intellectual property business, all tied to what's happened in the news. So law firms are looking at this to enhance their marketing, their client outreach, their situational awareness. But it's not just law firms; in-house teams are using it as well. We're actually rolling it out to founders. A lot of companies don't even have legal or government affairs people, but they still care about the law. And while they wouldn't have paid $100,000 for that advice, they'd pay $1, right? I have a business in four countries. I'm a founder. I don't have a huge amount of money to pay for advisory work, but if I could pay $1 and understand how the politics in all those places are affecting my business, I'll pay that dollar. I might pay $10. And so my job is to discover the price they'll pay.
But I can tell you, for anyone listening, you can pay about a third less if you use the promo code geek when you register at global.Hence.ai, which is where you can find the product and give it a shot. There's a free trial, so you're not committed. But really, the product's been designed to be purchasable by an individual practitioner. So to take us full circle to the beginning of this episode, where lawyers are swiping credit cards and buying ChatGPT, that's our vision for this product, right? An individual partner wants better situational awareness. They invest in their future. It's the cost of taking a client out to dinner. It's the cost of what some of them charge for one hour of advice, and you get a year of it. They swipe the credit card, and then they're better informed and hopefully able to earn 100 times that in the advice that they give their clients.
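The mass customization Sean describes, one news stream producing a different view for the IP lawyer than for the employment lawyer, boils down to scoring each story against a per-user interest profile. The sketch below is only a toy illustration of that mechanism using simple word overlap; whatever models Hence Global actually uses are presumably far richer:

```python
def tokens(text: str) -> set[str]:
    """Crude tokenizer: lowercase words longer than three characters."""
    return {w.strip(".,").lower() for w in text.split() if len(w) > 3}

def relevance(profile: str, story: str) -> float:
    """Jaccard overlap between a user's interest sentence and a story."""
    p, s = tokens(profile), tokens(story)
    return len(p & s) / len(p | s) if p | s else 0.0

def personalize(profile: str, stories: list[str], top_n: int = 2) -> list[str]:
    """Surface the top_n stories most relevant to this user's profile."""
    return sorted(stories, key=lambda st: relevance(profile, st), reverse=True)[:top_n]
```

With the same story list, a profile like "intellectual property and patent disputes" and one like "employment law and labor regulation" surface different items, which is the "completely different news for different types of lawyers" effect in miniature.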

Greg Lambert 42:08
Sean, we're at the point now where we ask our guests to pull out their crystal ball and peer into the future for us. So looking into your crystal ball and looking ahead, what trends do you think will have the most significant impact on technology, business, and society over the next few years, and what should organizations and individuals be doing right now to prepare for that?

Sean West 42:38
So look, I think it's clear that there is a potential scenario where we live in a world of abundance, and our job is to figure out how to distribute that abundance, and that's not an easy political question to solve. All those out-of-work lawyers we talked about still have car notes and second mortgages and nanny payments and things like that. Does universal basic income cover that or not, right? A world of abundance is not necessarily a world of simple answers, but it's a better scenario than many others. And I am very optimistic about the productivity gains from a lot of these products, and about how they're going to barrel through all the regulation holding them back. It's going to be nasty, and there'll be some casualties in the process of trying to figure out where the guardrails are, but ultimately I think society will be on much better footing. But the big lesson, and I write about this, the book is called Unruly both because the world is volatile and because the rules are shifting and eroding. I have a whole chapter on tech and job replacement, and the big takeaway in that chapter is that you could move towards that more socialist, UBI scenario, or you could move towards authoritarianism, because technology enables a lot of surveillance and the use of data. If you look at the Chinese model, where if I say the wrong thing on social media I might not be able to get a mortgage, that is technologically feasible in a place like the US, and I think people are more worried about it today than they've been in the past. The reason I highlight that is because it has some very real implications for businesses that have to navigate through that transition.
My advice would be, think about where you're going to implement these tools and how you're going to protect your staff. If you're going to replace staff with AI, let's front stab about that. Let's be vocal, right? But offer retraining, offer to fund it. AT&T, when it did its big digital transformation, spent a billion dollars funding education for out-of-work employees to upskill and continue to work with the firm. They didn't want to lose the firm-specific knowledge. If you take a decision that you're not going to replace workers with AI for the next three to five years, say that, commit to that, because you will be a magnet for human talent. Everyone will want to work for you, because they love job security. So part of this, I think, is thinking about the future scenarios and taking a bit of a medium-term perspective on what your values are as a company and where you want to be, and taking a position on it, right? You can't hide. If you pretend you're not implementing AI and replacing jobs, and then you say, oh, sorry, suddenly we have 20% less staff, the rest of the staff is going to have anxiety about that. You have to be a little transparent about your thought process.

Greg Lambert 45:17
All right. Well, Sean West, from Hence Global, and the author of Unruly: Fighting Back When Politics and Law Upend the Rules of Business. Thank you very much for coming in and talking with us. This has been fun.

Sean West 45:32
This has been absolutely the most fun I’ve had all week.

Marlene Gebauer 45:36
And thanks to all of you, our listeners, for taking the time to listen to The Geek in Review podcast. If you enjoy the show, share it with a colleague. We'd love to hear from you, so reach out to us on social media. You can find us on LinkedIn and Bluesky.

Greg Lambert 45:48
Yeah, and Sean, we'll make sure that we put the links in the show notes. But what's the best way for listeners to reach out and learn more about your book or about you?

Sean West 45:57
Yeah, so if you go to Hence.ai, you can learn about the product, and you can sign up from there. The book, on release, is available on Amazon; it's available everywhere. And if you want to reach out to me, follow my Substack, geolegal.substack.com, and reply to any weekly email you get. There is a human on the other side named Sean who will write back to you.

Greg Lambert 46:17
Wow, that's interesting. A human on the other side, who would have thought? That's unique.

Marlene Gebauer 46:21
And as always, everyone, the music you hear is from Jerry David DeCicca, so thank you very much.

Greg Lambert 46:28
Thanks, Jerry.