This week we welcome back Niki Black to unpack the findings from the newly released 2026 Legal Industry Report from 8am. The conversation centers on a legal profession moving into a new phase of AI adoption, where individual lawyers are embracing general-purpose AI tools at a striking pace, while many firms still lack even basic policies or training. Niki explains that this disconnect is especially visible among solo, small, and mid-sized firms, where limited resources often slow formal governance even as day-to-day use rises fast.

A major theme of the discussion is the widening gap between personal experimentation and institutional readiness. Niki notes that lawyers are not waiting for permission, and many are already relying on AI to support research, drafting, and routine work. At the same time, firms are struggling to provide guidance, training, and guardrails. The episode highlights the growing risk of shadow AI in legal practice, especially when lawyers and staff turn to unsanctioned tools to keep pace with client demands. For smaller firms, the answer is not elaborate bureaucracy, but practical direction, clear expectations, and a recognition that even a modest policy is better than none.

The conversation also turns to client expectations and the economic pressure AI is placing on the traditional law firm model. Greg and Marlene press Niki on whether firms are truly ready to move away from the billable hour as AI compresses the time needed to complete legal work. Niki argues that large firms face deep structural obstacles because compensation systems, staffing models, and internal economics remain tied to hourly billing. Still, she sees pressure building from in-house counsel, boutique competitors, and smaller firms that use technology to deliver comparable work at lower cost. The result is a market that may resist change, but not escape it.

Another standout part of the episode explores how AI is reshaping access to justice. Niki points to the promise of generative AI as a force multiplier for legal aid lawyers and public defenders, especially when paired with trusted tools and better funding. She rejects the idea that technology alone will solve the justice gap, but makes a strong case that AI, combined with stronger institutional support, helps lawyers serve more people with better results. At the same time, the hosts and Niki acknowledge the risks of a two-tiered system, where wealthier clients benefit from high quality tools while vulnerable users face lower quality, error-prone outputs.

By the end of the episode, the conversation expands from AI tools to a broader structural shift across firms, clients, and law schools. Niki sees the next three to five years as a period of deep change, where pricing, training, competition, and professional expectations all evolve at once. She also shares her own methods for keeping up, including RSS feeds, trusted blogs, and LinkedIn, with a few playful complaints about Substack making life more complicated. The episode leaves listeners with a clear message: the biggest issue is no longer whether AI will affect legal practice. It already is. The real question is whether the profession can adapt fast enough to manage the consequences wisely.

Listen on mobile platforms: Apple Podcasts | Spotify | YouTube | Substack

[Special thanks to Legal Technology Hub for sponsoring this episode.]

Email: geekinreviewpodcast@gmail.com
Music: Jerry David DeCicca

Transcript:

Marlene Gebauer (00:00)
Hi, I’m Marlene Gebauer from The Geek in Review, and I have Nikki Shaver here from Legal Tech Hub. Nikki is going to tell us a little bit about what Legal Tech Hub is doing at Legalweek.

Nikki Shaver (00:13)
That’s right. Thanks, Marlene.

Hi, everyone. Legalweek is coming up March 9 through 12. We’re super excited, especially to see what it’s like for the first time at the Javits Center. And of course, we’ll be there. We hope all of you will come and see Legal Tech Hub there.

We’re going to be there in full force this year with representatives from our advisory team, Cheryl Wilson-Griffin and Sam Moore, so you can find out more about our offerings there. Our sales team, of course, headed up by Alex Koop, will also be there.

Stephanie Wilkins will be there covering everything that’s going on, so if you have a news story, please seek her out. And our leadership team will also be there. We have a booth at Stand 341 in the vendor hall, and it’s a great time to catch up on everything new in legal tech. So if you want to stop by and chat and find out what’s new and what’s happening, feel free to come by.

Also, if you want to find out more about our full suite of offerings across LTH Premium, our advisory services, and our upcoming events, it’s a great time to stop by and say hi. And if you want to arrange a meeting ahead of time, feel free to reach out to us on LinkedIn at our company page, Legal Tech Hub, or at our website, LegalTechnologyHub.com. We can’t wait to see you there.

Marlene Gebauer (01:38)
Yeah, it’s a great conference, and it’s great to hear that you’re going to have a presence there.

Marlene Gebauer (01:51)
Welcome to The Geek in Review, the podcast focused on innovative and creative ideas in the legal industry. I’m Marlene Gebauer.

Greg Lambert (01:58)
And I’m Greg Lambert. Marlene, we’ve spent a lot of time, really only the last few years, even though it feels like the last few decades, talking about the promise of artificial intelligence. But today we’re going to dig into the hard data of what is happening on the ground, especially in solo, small, and mid-sized firms that make up most of the legal industry itself.

Marlene Gebauer (02:25)
Yeah, and they’re the ones that really aren’t reported on as much, I think. We love a good data story here at The Geek in Review, so we are thrilled to welcome back our good friend, Niki Black. Hi, Niki. For those of you who don’t know, Niki is an attorney, a veteran legal journalist, and the Principal Legal Insight Strategist at 8am.

Greg Lambert (02:38)
Yay.

Niki Black (02:39)
Hi.

Marlene Gebauer (02:51)
She’s here to break down the newly released 8am Legal Industry Report 2026, and it provides an incredible look at how legal professionals are navigating AI adoption, billing pressures, and systemic challenges to the justice system.

Greg Lambert (03:07)
So, Niki, welcome back to The Geek in Review. Good to have you back.

Niki Black (03:10)
Thanks. I’m looking forward to this.

Marlene Gebauer (03:13)
All right, so we’ll jump right in. The 2026 data paints a really vivid picture of a profession rapidly adopting AI from the bottom up. Your report notes that 69% of legal professionals are now personally using general-purpose AI tools for work, more than double from last year, I think. However, there seems to be a massive governance gap.

Greg Lambert (03:37)
Governance. How do you spell that? I don’t get it.

Niki Black (03:43)
Okay.

Marlene Gebauer (03:45)
Forty-three percent of respondents say that their firm has no formal AI policy and no plans to create one, and only 9% are actively enforcing a written policy. Yikes. So with individual adoption skyrocketing while organizational governance lags, what’s wrong with this picture? How critical is the threat of shadow AI in the legal sector right now?

Are we sitting on a ticking time bomb of breached client confidentiality because staff are using unsanctioned tools to meet daily targets?

Niki Black (04:21)
Well, that’s a really interesting question. And it was really interesting looking over the data this year because it’s our third year tracking generative AI adoption and perspectives on generative AI. And it really did stand out that you had more than double the number of individuals using generative AI, more than twice as many compared to last year. And that’s an unbelievable jump in our profession, right? For technology to be adopted that quickly.

But another really notable statistic was the data on governance and how there’s not a lot of training or policies being rolled out. I do think that those data points, in some ways, absolutely correlate to the people who actually responded to this survey. As Greg mentioned at the beginning, it was solo and small firms, and some mid-sized firms, predominantly. There are some large firms in there too, but I do think that the larger an organization gets, the more likely they are to have people devoted to governance and training. And the smaller it gets, the more of a lift that becomes. So I think that’s reflected a little bit in the survey results.

But I also think that this lack of guidance from firms is in part due to how fast people are adopting this tool. I mean, it’s hard to keep up and understand what the guidance should look like. So I think that contributes to the gap as well.

Greg Lambert (06:02)
So if it’s a solo or small firm, let’s say one to four or five attorneys, do they really need an official governance package? How do they do that? Is it something they actually write down, or is it something everyone kind of agrees to?

Marlene Gebauer (06:16)
I was thinking about the news this morning. I was like, yes, they definitely need to.

Niki Black (06:28)
Right.

I mean, I think every firm needs guidance and training on AI. And if it’s a solo, then you’re guiding and training yourself. But once you have employees, you need to have something. And AI makes it easy to come up with a plan. So why not use the tool to help your firm figure out how to provide this information and create some guardrails for your employees? So I think there’s a simple solution to the problem, and it’s to use the tool to help you give guidance on the tool.

Greg Lambert (07:01)
Use the AI to help you AI.

Niki Black (07:04)
Right.

At least I think it helps you get the information you need and collect it and start a draft, right? And then from there, you do what you’re supposed to do, which is edit, revise, ensure accuracy, do a little legal research, and then roll it out, right?

Greg Lambert (07:22)
Let me transition a little bit here from the firm to talk about transparency between firms and clients. In the report, I think it explicitly notes that client-driven pressure remains pretty small when it comes to asking for, if you’re using AI, I need to see some cuts in my bill for this.

But then when we look at the corporate side, the ACC and Everlaw data indicate that something like 60% of corporate clients have no idea whether their firms are using AI or not. So if there’s a lack of client pressure reported by your respondents, and there’s a temporary illusion based on the retail nature of solo and small firm clients, how quickly do you think demand is going to start increasing and people are going to say, you should be using AI and my bill should be getting smaller?

Niki Black (08:37)
Well, that’s such an interesting question and issue. I’ve long said that corporate counsel, and some of the most sophisticated legal consumers, are corporate counsel and insurance companies, right? Because they are sophisticated consumers of legal services. Both of them buy legal services from lawyers and obviously know a lot about how much time it should take to get these legal services performed.

So I’ve always thought you would see those two groups initially putting the pressure on to use these tools, increase efficiency, and reduce costs in a sophisticated way because they’re sophisticated consumers. And I do think that from the data coming out about corporate counsel, that’s what’s happening.

When you think about traditional legal consumers, when it’s not B2B, which is what corporate counsel and insurance are, and you think about certain types of legal consumers, criminal law, family law, trusts and estates, these are individuals who need to hire a lawyer for some reason. Oftentimes they don’t expect to have to hire the lawyer. They have to hire the lawyer without planning to. They aren’t sophisticated purchasers of legal services.

So I think it’s going to take a lot longer for them to expect that kind of transparency or make that kind of demand because they don’t understand what the practice of law looks like. They’re trying to get a lawyer to solve their problem. So they don’t know whether it’s being done efficiently or not. And they can’t compare apples to apples, or apples to oranges, I don’t know what the right comparison is. Legal services performed with AI versus legal services performed without it, they don’t know the difference between those two yet.

But once they start using it more and more in their own work, they may start to want to understand how lawyers are using it. And I think that gap will start to close. But it is pretty interesting and significant.

Greg Lambert (10:43)
Let me twist that question a little bit. Are lawyers seeing clients show up with AI-generated information to help them talk the talk when they meet with a lawyer? Are they showing up better prepared? Because Marlene and I were talking to a class of students this morning, and one of the things I brought up was that we’re seeing more pro se filings that are pretty sophisticated because people are starting to use AI tools to help them draft their complaints.

Like it or not, AI is getting better, and it’s making some of these complicated issues more accessible for pro se litigants to bring into court. So I’m curious, are you hearing anything about attorneys whose clients are showing up with more information, or at least paperwork they printed out from ChatGPT?

Niki Black (12:06)
Well, that isn’t an issue I’ve spoken to attorneys about, but I think it’s a good supposition. It used to be that people were a little frustrated with the Google lawyer, right? They’d show up thinking they knew a lot about a topic because they Googled it. And I’ve got to imagine you’ve got a little of that same thing going on with ChatGPT, or these other generative AI tools, in that clients are doing a lot of research ahead of time.

But they may be going down completely wrong roads in their research, down the wrong path, because they don’t really understand what the procedural aspects are supposed to look like or what the actual legal issues are. So I do think, depending on the sophistication of the legal consumer, it may be helpful, or it may take longer to walk them off that path and explain to them that’s not the actual issue you’re dealing with here, and this is what you’re dealing with. And then you’ve got to butt heads with them because it sounds great when it comes out of generative AI, right?

So yes, I think you’ve got part of that going on. And I think that pro se issue is really interesting. I just read a hallucination case where the pro se litigant brought so many motions and filed so many memos, and it sounded like they were driving defense counsel nuts because at one point the court basically said, defense counsel, you’re obsessed with their AI. You need to stop asking about it. They can use it.

I think it’s overwhelming. You know how lawyers talk about a flurry of papers filed on a Friday before a holiday or something? I think it kind of feels like that sometimes on the defense side when you’ve got a pro se plaintiff, because all of a sudden you’re overwhelmed with papers, some of which don’t make a lot of sense, and you have to unwind it and then respond to it. So I think it’s interesting, all these different things happening and happening so quickly in our legal system.

Marlene Gebauer (13:55)
And pro se aside, I also wonder how some of these recent rulings are going to impact clients’ use of some of the gen AI tools. I mean, I suppose if they’re doing their own research and then sharing research with their attorney, is whatever they’re providing still privileged? That’s also something they’ve got to think about.

Niki Black (14:23)
Yeah, I just wrote a couple of different articles about that for the Daily Record, and there’s a split right now. There are only two courts that have even looked at that, apparently, as far as I know. And there is a split in the federal district courts about it. One says it’s privileged. One says it’s not privileged. Another says it’s work product.

One is treating it as a third party, that the information has been disclosed to someone else. It’s a little more complex, and the factual scenario is different. But essentially, one says it’s disclosed to a third party, and another says it’s work product and protected. So it’ll be interesting to see. I mean, I think it’s got to go that way. Otherwise, you’re not going to be able to use the cloud or email or anything else.

But yeah, we’re going down a different path here, but it’s a really interesting issue. I’m glad you raised it.

Marlene Gebauer (15:01)
Exactly.

Greg Lambert (15:02)
Yeah.

Marlene Gebauer (15:06)
So I’m going to tee up this next pretty straightforward question with a little bit of background. The report highlights that nearly 50% of legal professionals expect AI to change billing practices, with 25% expecting reduced billable hours per matter and 22% anticipating more flat fees. And Thomson Reuters has recently warned of an impending AI bubble, noting that law firms are currently seeing record profits largely driven by rate hikes rather than passing on AI-driven efficiency gains.

I have issues with that statement, but that’s what’s reported. So the question is, we’re seeing a structural conflict between firms investing heavily in efficiency and still remaining wedded to the billable hour. Are you sensing that firms are prepared to abandon hourly billing in favor of value-based pricing? Are we there?

Niki Black (16:17)
I think for large firms, that’s going to be challenging in the short term. And I’ve never practiced law at a large firm, but people I know well who have, and who are partners in these firms, have explained to me that the compensation structure and everything else about how those firms function are so tied into the billable hour that it’s not a matter of changing how you bill. You’re going to have to change so many other aspects of how that firm is structured, and all the cascading things that are impacted by that.

So I think for large law firms, the pressure is going to come from a few places. First, from in-house counsel requiring them to change it because otherwise they’re going to use someone else. Second, from boutique firms that spin off. This has always happened. People leave large firms, spin up a boutique firm in one practice area, take a bunch of clients with them, and are able to use AFAs and charge in different ways that undercut and compete with hourly billing while still providing the same level of big law service, knowledge, and expertise.

So I think you’re going to see more of that happening, especially as these firms start cutting people the way they’re doing because of AI. They’re reducing associate levels and admins, but eventually they may start cutting not only entry-level associates, but actual associates and possibly partners. Who knows what they’re going to do.

But the other place I think you’re going to see it is from solo and small firm lawyers who may have prior experience and are going to be able to start competing in ways that technology has already enabled, but at a much higher level. And potentially do the same thing these boutique firms are doing, undercut the pricing and provide similar levels of service at a lower price. So I think that’s coming. They’re not going to do it willingly, but it’s going to happen.

Greg Lambert (18:29)
One issue that almost always pops up every time there’s any kind of change in the legal marketplace is, well, in-house counsel is going to handle more of their work in-house. Do you think that’s also a possibility? We always project that, but it never really happens.

Niki Black (18:55)
I think that with AI, it’s already happening in the way you suggested it would happen with consumer clients. They are doing a lot of that initial work that they would have had to hand off, and they’re handing it off in much better shape, where a lot of the initial legwork is done. And then they want outside counsel to finalize the product or provide the final answer.

So I think it is already happening. And the Claude for Legal tool, I’m not sure of the exact name, but that was just released, and I think it’s going to expedite that process even more.

Greg Lambert (19:31)
Well, speaking of Claude and CoCounsel, pretty much up until a couple of months ago, AI was ask it a question, get an answer. So in the survey, it showed something like 58% usage for legal research and 49% for drafting documents. This is where I give it a specific task, it comes back with an answer, and I may clean that up a little bit, but that’s about it. It’s give and take.

Now we’re getting more into agentic AI, where you’re basically giving it instructions on the output you want, and then it goes out and does it, where the systems kind of work autonomously. So if a significant portion of the legal industry is struggling to figure out how to draft basic governance policies for the ask-it-a-question, get-an-answer part of it with these text-generation chatbots, how equipped are we to handle the complex ethical and malpractice risk associated with autonomous, agentic AI tools?

Niki Black (20:57)
It’s a good question. Part of it comes down to how you define agentic AI in the context of legal work, which is a big debate a lot of people are having. Is it kind of like how LexisNexis and Thomson Reuters have things set up, where you have a generative AI chatbot at the front that answers some legal questions, and then when you want to go deeper, it determines which subscriptions or integrations you have and goes to those and pulls out the information you need?

That’s arguably agentic AI, but some people say no, because it’s in a closed system. It’s not using the internet to go out and do other things, which some people claim gets closer to the practice of law. Do you let it just go out there and practice law for you? I don’t know.

Without even having a full definition of agentic AI, and with the fact that it’s already being built into a lot of legal technology tools, lawyers may not realize they’re using it. Assuming we agree that that is agentic AI, which I think it is, I think they’re already using it in some cases. It’s just more of a closed system, so it reduces the risk a little. And they still have that obligation to review the output and make sure it’s accurate. So you still have that framework in place.

Greg Lambert (22:21)
Well, let me define it a different way then. Let’s say I’m using Claude CoCounsel, which can read and write files on my own system. It can search the internet. It can interact with Chrome web browsing. So it can browse the web, not just on the back end, but using your browser going out. This is going to be a seismic shift, I think, in how attorneys use AI tools, or at least how clever attorneys use AI tools, to work on 10 or 15 different things at once and bring back results, analyze them, save files, and update files.

It seems like we’re ill-prepared for regulating that and for governance on that. I don’t see us ready for it.

Niki Black (23:27)
No, I think you’re right. I just switched to Claude and have been playing around with CoCounsel. And there was that LinkedIn post where a lawyer, I think a corporate lawyer, was talking about how he uses CoCounsel, and some of the things he said he was doing gave me pause because we all know these tools, once they start diving into actual documents that have complex information in them, start making things up right away. It doesn’t matter which one you’re using. They make things up.

And then when they take whatever they made up and apply it to something else, which is what would happen in this scenario, it becomes a game of AI telephone. That’s where I get a little worried. The output is going to be so bungled that I don’t understand how you can, in the current state, use it in that way, especially when it is agentic and does go out and get information and pull that information in from your documents and client files.

Right now, I don’t see it working the way this person talked about. I don’t know if you’re familiar with the post I’m talking about.

Greg Lambert (24:51)
Yeah, I saw it.

Niki Black (24:55)
Yeah, it was like you left it alone and let it go analyze all these things, and then it came back and he sent the letters or emails off to opposing counsel in the middle of the night. It made it sound like, how was he carefully reading the output, and how was it analyzing the contracts that carefully and then coming up with emails that made sense and didn’t have hallucinations in them? To properly review and ensure that wasn’t happening, it would take an hour, I would think, at least, to go over what it did. So I’m not sure.

It sounds good on paper, but I’m not convinced, just like you aren’t, that the output at this point is going to be reliable and trustworthy. And that impacts the ability to provide guidance on using it. I think we’re in a weird place because the tech is advancing so quickly and there is a lot of potential, but the reality still seems a little questionable when you get into things that complex.

Greg Lambert (25:55)
Yeah, I always worry about the things that go viral that sound a little too good to be true. So we’ll see. I think what he was saying was pretty accurate in the broad scope of things, but I think there were some issues being taken with the specifics. But yeah.

Marlene Gebauer (26:01)
Need to.

Niki Black (26:17)
Well, and I think that post was written by AI, by the way, because there was a lot of not this, but this going on throughout it. So it was written by AI.

Greg Lambert (26:22)
Yeah, some dashes, some not this, it’s this.

Marlene Gebauer (26:25)
And dashes.

Niki Black (26:27)
So you wonder if it even said what he wanted it to say. But who knows.

Greg Lambert (26:32)
Hahaha.

Marlene Gebauer (26:35)
I didn’t mean that. So I know the report had information on access to justice, and I want to shift the conversation a little bit to that. A strong majority of respondents, 76%, are optimistic that AI is going to help narrow the access to justice gap. And 72% admit the sheer cost of legal services is still a primary barrier. Then 62% of lawyers agree that the rule of law itself is currently under threat.

So there’s a real fear, I think, that AI could create a two-tiered system, premium, highly accurate AI for wealthy clients, and hallucination-prone solutions for pro se litigants. I think that’s rather gloom and doom myself, but how does the profession harness technology to democratize legal access and restore institutional trust rather than inadvertently widening the socioeconomic divide?

Niki Black (27:58)
I think the answer is in the data points you just stated. Access to justice is not a single-shot solution. There’s no one way to solve it. You absolutely need greater funding. And I agree with the respondents. That’s the primary thing that’s going to help fix access to justice in the way it should be fixed, where it helps average people get the legal help they need instead of ending up with a top-level system where the Big Four are competing and starting to provide legal services in ways they weren’t able to before because they weren’t licensed lawyers.

I think that when you have better funding, and then you have the technology supporting public defenders, legal aid lawyers, and others who are on the front lines trying to bridge that gap for people who can’t afford lawyers, if they’re getting paid through better funding and are able to use generative AI tools, especially ones built into trusted legal tools that are guardrailed on the back end and where they can trust the output and that are developed more slowly in order to ensure that output, then it allows them to help more people.

It allows them to get paid because of better funding and then provide more efficient and effective legal services, represent more people, and provide them with a higher quality of representation. And I’m not saying that in the abstract. I had a legal aid lawyer reach out to me after one of the articles I wrote saying that he started using ChatGPT when it first came out and was able to take on more clients because it increased his efficiency so much.

So I think when you combine those two things, it does make it easier to democratize access to legal services rather than creating that two-tiered system. But without the funding, I think it ends up creating a two-tiered system.

Greg Lambert (30:18)
Before we roll out and ask you the crystal ball question, is there anything else in the report that stood out to you, maybe something that surprised you or something else we haven’t covered?

Marlene Gebauer (30:39)
Yeah, we did not like that.

Niki Black (30:40)
Well, I thought the second half of the report was really interesting. The first half focused on AI trends, and then the second half of the report dove into systemic legal issues, access to justice, and how lawyers and other legal professionals were looking at the rule of law. I thought those data points were interesting because when you looked at the data from lawyers versus other legal professionals, I don’t love the term non-lawyers, but other legal professionals, you really did see different perspectives, in large part driven by the roles they play in the justice system. So I thought that was super interesting.

It’s an interesting time, and I think having data like this helps you get some perspective on what’s happening with all the rapid technological advancements and how that could potentially impact the administration of justice.

Greg Lambert (31:34)
Well, definitely something to check out. Before we get to the crystal ball question, what are some resources you’ve been using to help you keep up with all the changes going on in the legal industry lately?

Marlene Gebauer (31:56)
What are your go-tos?

Niki Black (31:57)
Well, I go old school. I kick it old school with my RSS feed reader, Feedly. That’s one of the primary ways I track what the people in the industry I trust are saying. And I’m also very active on LinkedIn, so that helps a lot too. But for me, it’s following the blogs of people I trust and seeing what they’re saying.

One thing that’s a little challenging now is that some people are moving things over to Substack. So it starts to get harder. There’s no RSS feed, and you’ve got to check Substack separately and follow the right people there. And that makes it harder when a few people I’ve really enjoyed have moved over there. Are you on Substack?

Greg Lambert (32:42)
So you’re saying we’re part of the problem.

Marlene Gebauer (32:44)
Exactly what she’s saying.

Greg Lambert (32:46)
Yeah, we’re testing it out.

Niki Black (32:52)
Well, but you also have the podcast. Is that behind the wall on Substack?

Greg Lambert (33:00)
No. We still put the blog and everything out. We’re just testing Substack for fun.

Marlene Gebauer (33:01)
There’s nothing behind the wall.

Niki Black (33:07)
I get it. I mean, people are able to create community and monetize what they’re doing more easily on Substack. And I’ve looked at it myself. It doesn’t make sense for me right now, but I see why it might for others. It gets to the point where if a whole bunch of people move there, then you’ve got to start working Substack into your day, you know?

Marlene Gebauer (33:28)
All right, so now it actually is time for the crystal ball question.

Niki Black (33:28)
Making my life harder, you guys.

Marlene Gebauer (33:32)
So now it actually is time for the crystal ball question. Looking ahead over the next three to five years, Niki, what do you think is the single biggest challenge or change coming for the legal industry that firms should start preparing for right now?

Niki Black (33:35)
Yeah.

I mean, it’s the bigger picture of everything we’ve talked about. It’s all the structural impacts that are going to come from all the changes happening right now, in large part driven by AI, but not only AI. Pricing changes, the ability for smaller firms to compete with larger firms, and that applies to law schools too.

I just spoke at my law school last week, and I talked to some of the deans afterward. And they’re all struggling to figure out what they do. How do they teach tech? How do they implement it? How do they prepare the students? How do they change what law school looks like, possibly? They’re going to have to prepare students more for solo practice, prepare them for other types of roles, and make sure they understand how to use tech to allow them to excel in those roles. So it’s not just law firms.

And I think client expectations are going to change. It’s this major structural change that’s going to happen really quickly. So that’s going to be the biggest challenge, figuring out how to adapt within those huge changes taking place.

Marlene Gebauer (35:07)
Big changes and time restraints.

Greg Lambert (35:10)
Yeah, definitely. Well, Niki Black, Principal Legal Insight Strategist at 8am, thanks again for coming back and joining us on the podcast and walking us through the future of the legal profession. Always a pleasure.

Niki Black (35:28)
Yeah, my pleasure. This was a lot of fun.

Marlene Gebauer (35:30)
Yeah, thank you, Niki. And thanks to all of you, our listeners, for taking the time to listen to The Geek in Review podcast. If you enjoyed the show, please share it with a colleague. We’d love to hear from you on LinkedIn and Substack.

Greg Lambert (35:41)
And on Substack. Sorry, Niki. Niki, for listeners who want to follow your work, read your columns, and download the 2026 8am Legal Industry Report, where should they go?

Niki Black (35:59)
Well, the Legal Industry Report is available on 8am.com under Reports, so definitely go there and download it. I think you’ll find it interesting. I wrote it, and I think there’s a ton of great stuff in there. And our team did a fantastic job supporting me in writing it, and the design is fantastic in my opinion. It looks great.

And to follow me, LinkedIn is the best place because I post my columns there and I have my newsletter there, Legal Tech Reality Check. So LinkedIn would be the best place to connect with me.

Marlene Gebauer (36:32)
And as always, the music you hear is from Jerry David DeCicca. Thank you, Jerry, and bye, everybody.