I’ll be the first to admit it. Back on February 5th, when Anthropic dropped Claude Opus 4.6 and OpenAI fired back with GPT-5.3 Codex on the same day, I was right there geeking out with everyone else. A million-token context window! A model that helped build itself! People were calling it the “Kendrick vs. Drake” of the AI world, which… well, okay, I’ll allow it just this one time.

The problem is, February 5th didn’t actually change the way most law firms and their lawyers worked the next morning.

April 10th did.

The Sidebar That Changed Everything

Last Friday, Anthropic released Claude for Word as a public beta. A simple Word add-in. A sidebar.

And I think it might be the most consequential legal tech release so far in 2026.

I know, I know. A sidebar? But think about what it actually solves. Every lawyer I’ve talked to who didn’t have a Harvey or Legora plug-in but has been using AI for the past two years has been doing the same awkward dance: copying text out of Word, pasting it into a browser, prompting the AI, copying the response back, manually formatting it, and hoping nothing breaks along the way. It’s the AI equivalent of printing out an email so you can fax it to someone.

The Claude Word add-in kills that workflow dead. You prompt Claude right there in the document. It returns edits as tracked changes. Actual tracked changes, in the review pane, with the formatting intact. Your multilevel numbering doesn’t explode. Your defined terms survive. Cross-references stay put.

For transactional lawyers, it goes even further. A single Claude conversation can span your Word purchase agreement, your Excel valuation model, and your PowerPoint deal summary. You can ask it to check whether the narrative in your financial report matches the underlying data without leaving the document you’re working in.

Is it perfect? No. It’s beta. Anthropic themselves have warned against using it for final legal filings without human oversight. And the security folks are rightly flagging the risk of prompt injection attacks embedded in external documents, something that should make every information governance person sit up straight.

But the direction is unmistakable. AI just moved from the browser tab to the document.

Meanwhile, Musk Brought a Debate Team

On the same Friday, xAI released Grok 4.20, and the architecture is genuinely interesting. Instead of one model trying to be right, Grok runs four specialized agents (a coordinator, a researcher, a logician, and a creator) that essentially argue with each other until they reach consensus.

The result? A 22% hallucination rate. The lowest ever measured by Artificial Analysis. For an industry where hallucinated case citations have gotten lawyers sanctioned by federal judges, that number matters.
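xAI hasn’t published how that internal debate actually runs, so treat the following as a toy sketch of the general debate-to-consensus pattern, not anything resembling Grok’s code. The call_agent placeholder, the role names, and the majority-vote fallback are all assumptions made purely for illustration.

```python
# Toy sketch of a debate-to-consensus loop with specialized agent roles.
# Illustrative only: call_agent is a placeholder, not a real model call.
from collections import Counter

ROLES = ["researcher", "logician", "creator"]


def call_agent(role: str, question: str, prior_answers: dict[str, str]) -> str:
    # Placeholder: in a real system this would prompt a model with a
    # role-specific system prompt plus the other agents' latest answers.
    return f"draft answer from {role}"


def debate(question: str, max_rounds: int = 3) -> str:
    answers: dict[str, str] = {}
    for _ in range(max_rounds):
        # Each specialist answers, seeing the others' previous positions.
        answers = {role: call_agent(role, question, answers) for role in ROLES}
        normalized = {a.strip().lower() for a in answers.values()}
        if len(normalized) == 1:
            # Full agreement: the coordinator accepts the consensus answer.
            return answers[ROLES[0]]
    # No consensus within the round limit: coordinator takes the majority view.
    most_common, _ = Counter(
        a.strip().lower() for a in answers.values()
    ).most_common(1)[0]
    return most_common


if __name__ == "__main__":
    print(debate("Does clause 7.2 survive termination?"))
```

The point of the pattern is simple: disagreement among the specialists becomes a signal to keep working rather than an answer to ship, which is one plausible way an architecture like this pushes hallucination rates down.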

But here’s where it gets weird. Reports surfaced that banks and law firms working on the upcoming SpaceX IPO (we’re talking JPMorgan, Goldman Sachs) have been required to subscribe to Grok. Not recommended. Required.

So now we have a world where your legal tech stack isn’t just something you choose. It’s something that gets mandated as part of the deal ecosystem. That should make everyone uncomfortable, regardless of how you feel about the underlying technology. And since I’ve never heard a law firm or corporate AI strategy leader seriously say, “You know what we really want to build our AI strategy on? Grok Enterprise,” I have to assume there are some other incentives at work.

The “SaaSpocalypse” and Who Actually Survives

You can’t understand April 10th without backing up to February 3rd, when the announcement of Claude’s legal plugins triggered what the market delightfully called the “SaaSpocalypse.” A $285 billion single-day selloff in SaaS stocks. Investors panicked. If Claude can do contract review natively inside Word, what happens to all the legal tech companies whose entire value proposition is… putting a UI on Claude?

By April 10th, the dust had settled and the answer was becoming clearer. Ron Friedmann and the Gartner team noted (and just for fun, I did too) that Anthropic’s plugin wasn’t really a threat to companies like LexisNexis or Thomson Reuters. Those vendors have decades of curated case law and proprietary data that no foundation model can replicate overnight. They slept fine Friday night.

The companies that should be worried? The ones whose moat was “we integrated an LLM.” Because that integration moat just evaporated. The Model Context Protocol (MCP) is standardizing how AI connects to databases like iManage and Clio, and it’s doing to legal tech wrappers what USB-C did to proprietary chargers. If you’re a workflow platform like DISCO or Consilio, you’ve got some runway, but you’d better be “platformizing” fast. And if your company’s entire pitch was “we put a nice interface on Claude”… well, I hope the resumes are updated. Anthropic just showed up in your lane and they’re not charging a markup.

I keep coming back to what Stephane Boghossian nailed: “Prompting isn’t a product.” That’s the harsh truth of April 10th. The thin-wrapper era is over.

The Regulators Apparently Got the Memo

If it had just been the Claude and Grok releases, April 10th would have been a busy Friday. But the regulators decided this was their day to show up too.

The White House dropped a National Policy Framework for AI that proposes preempting state-level AI development regulations. That’s a direct shot at California’s SB 53 and Colorado’s SB 24-205. The framework includes a “Right to Compute” provision that would prevent states from banning AI-assisted legal research or drafting. For our world, that’s significant.

Speaking of Colorado, Elon Musk’s xAI filed a 75-page federal lawsuit (xAI v. Weiser) to block Colorado’s algorithmic bias law, arguing it would force Grok to promote a “State-enforced orthodoxy.” Now, I find the idea that an AI model has a “disinterested pursuit of truth” that deserves First Amendment protection to be… let’s say a stretch. But Musk’s lawyers aren’t dumb, and the underlying question (can a state force an AI model to produce certain kinds of outputs?) is one that’s going to matter a lot more in two years than it does today. Keep your eye on this one.

And in what might be the most practically useful development of the day, the Third Circuit ruled that legal tech startup UpCodes’ publication of copyrighted building standards online likely constitutes fair use. If that holds, it has real implications for every AI-driven legal research tool that needs to ingest government-published standards.

“Trust, Not Technology, Is the Constraint”

Here’s where I want to get practical, because I think the industry conversation has gotten stuck on the wrong question.

Everyone keeps asking: “Which model is best?” The benchmark tables are everywhere. Gemini 3.1 Pro and GPT-5.4 are tied for first on the intelligence index. Claude Opus 4.6 leads in coding. Grok 4.20 leads in hallucination reduction. It’s all very interesting and almost entirely beside the point for most working lawyers.

The real question is: do your people trust it enough to use it?

Wolters Kluwer’s research shows 54% of clients now expect their legal partners to be AI-competent. Not “exploring AI.” Competent. They’re asking sharper questions about turnaround times and whether those times reflect an AI-accelerated workflow.

Industry leaders like Mark Brennan and Nicole Stone have it right: “Trust, not technology, is the constraint.” February 5th gave us the raw capability. April 10th started building the infrastructure to make that capability usable.

And I’ll tell you what I’m watching most closely: the emergence of law librarians and knowledge managers as the critical players in this transition. The people who have spent their careers managing information, evaluating sources, and training professionals to use complex research tools? They’re exactly the people firms need right now. Not sitting at reference desks but operating from what some are calling “Genius Bar” environments, helping attorneys learn to work with AI responsibly.

The more things change, right?

Was April 10th Really That Important?

I’m going to say yes.

February 5th showed us what AI could do. It was impressive. It was exciting. And for most working lawyers, it was abstract. April 10th showed us how lawyers would actually work. Claude moved into the Word sidebar. The White House started drawing regulatory lines. The courts weighed in on fair use. The market sorted out which legal tech companies have real value, and which were just wrapping someone else’s model in a nicer box.

February 5th was the day we all said “wow.” April 10th was the day somebody said “okay, but where does this actually go in my workflow?” And then it showed up in Word. Right there in the sidebar. Like it had always been there.

So here’s what I want to know: when the AI is no longer a separate step, when it’s just in the document, invisible, ambient… who at your firm owns the quality control? Is it the knowledge management and library professionals who have spent twenty years figuring out how to evaluate information sources and train people to use them responsibly? Or is that still being treated as an afterthought?

This week on The Geek in Review, we talk with Kristina Satkunas of CounselLink about what the numbers are saying in a legal market that still talks about change while clinging hard to old billing habits. Kris discusses the hard data behind outside counsel spend, drawing on CounselLink invoice data and Harbor survey results to compare what legal departments say they expect with what the bills are already showing. She makes the case that the objective data is stubbornly clear. Rates are rising, demand is not falling, and the biggest firms continue to capture a larger share of work.

There is a widening gap between hope and reality. Legal departments may believe they are on the verge of controlling outside counsel costs, moving more work in house, or shifting matters to smaller firms, but Satkunas notes that the billing data has not caught up to those ambitions. She sees some room for in-house expansion in more routine areas like employment work, especially with AI helping legal teams absorb more volume, yet the largest and most sensitive matters are still flowing to outside counsel. That tension gives the episode much of its energy. Everyone sees pressure building in the system, but the old habits of legal buying and legal staffing remain firmly in place.

The discussion also gets into the mechanics of better decision-making, and where there is practical value for legal operations leaders. Satkunas emphasizes that data only becomes useful when departments have enough discipline in their enterprise legal management systems to categorize work correctly, clean out outliers, and separate different matter types instead of lumping everything into broad buckets like litigation. She also explains why finance data alone will not do the job. The real insight sits inside invoice-level detail, where hours, rates, firms, and timekeepers reveal what is happening beneath the headline spend numbers. For listeners trying to build a stronger legal ops function, this part of the conversation feels like a polite but firm warning that dirty data still tells stories, but some of them are fiction.

AI is placing an obvious strain on the billable hour model. Satkunas notes that while average partner rate growth has hovered around 5 percent, top-end lawyers are often raising rates even faster, especially as firms try to protect revenue from the work and people they still believe clients will pay for. At the same time, she argues that alternative fee arrangements have remained stuck for years, though AI may finally force movement toward value-based pricing. If technology reduces the hours required to complete the work, then the old logic behind both hourly billing and many flat fees starts to wobble. That leaves firms facing an uncomfortable question, which is how to price legal services based on value delivered rather than time consumed.

We’d say that Satkunas is neither cheerleader nor doomsayer. She is a patient observer of a market trying to pretend nothing is happening while the floorboards creak under everyone’s feet. Her prediction is that real value-based billing will begin to appear in pockets over the next couple of years, even as firms continue squeezing what they can from the billable hour in the meantime. For law firm leaders, legal ops teams, and general counsel, this episode is a sharp reminder that disruption does not arrive with a trumpet blast. Sometimes it arrives as a spreadsheet, a trend line, and a guest who quietly points out that the data has been trying to warn us for years.

Listen on mobile platforms: Apple Podcasts | Spotify | YouTube | Substack

[Special Thanks to Legal Technology Hub for sponsoring this episode.]

Email: geekinreviewpodcast@gmail.com
Music: Jerry David DeCicca

Transcript:

Continue Reading CounselLink’s Kris Satkunas on Rising Legal Spend, Law Firm Rates, and the Future of Value-Based Pricing

This week on The Geek in Review, we talk with Gregory Mostyn, CEO of Wexler.ai, about how his company is building a sharper form of legal AI for litigation. In a market crowded with broad platforms that aim to handle every legal task at once, Mostyn describes Wexler as a focused system built for one of the hardest problems in disputes, understanding the facts. He shares how the idea grew from watching his father, a judge, carry home stacks of ring binders and spend late nights reviewing case materials by hand. That early picture of legal work, heavy with paper and pressure, became the spark for a company aimed at helping lawyers work through massive records with more depth, speed, and precision.

A central idea in the conversation is Wexler’s view that the most useful unit of analysis in litigation is not the document, but the fact. Mostyn explains that lawyers are often handed a mountain of emails, messages, filings, and exhibits, yet what they need is a clear understanding of what happened, why it matters, and where the pressure points sit. Wexler is designed to pull out events, inconsistencies, and supporting details from that record so litigators are working from a factual map rather than a pile of files. That shift matters because disputes are rarely neat. Important evidence may be tucked inside an offhand message, a late footnote, or an exchange written in vague, coded language. Wexler’s aim is to turn that mess into something a trial team can use to shape strategy.

Mostyn also walks through the mechanics that separate Wexler from more general legal AI products. He describes a detailed fact extraction pipeline that processes unstructured material and turns it into structured data before the system reasons over it. That design helps Wexler deal with the disorder of litigation, where timelines blur, people contradict each other, and key details are easy to miss. He also points to the scale of the platform, noting that it handles large document sets and supports work such as deposition preparation, trial preparation, summary judgment briefing, and early case assessment. One of the more striking features is real-time fact checking during depositions, where the platform helps lawyers spot contradictions in testimony as the questioning unfolds. The effect is less like using a search box and more like working with a tireless junior team member who has read the whole file.

Trust, accuracy, and restraint are another major part of the discussion. Mostyn is careful not to oversell what AI can do. He openly states that no system is perfect, yet he argues that Wexler reduces risk by staying inside the record given to it. It does not search the internet, does not drift into outside material, and ties its outputs back to specific text in the source documents. That discipline is important in litigation, where a made-up citation or invented fact is more than embarrassing, it is dangerous. Mostyn presents Wexler as a tool that helps lawyers verify, question, and sharpen their understanding of the case. The result is less time spent slogging through repetitive review and more time spent thinking about how to use the facts in a meaningful way.

The conversation closes on a bigger question about where this kind of technology leads the profession. Mostyn believes that as AI takes on more of the burden of document review and fact development, the value of human lawyering rises in other areas. Strategy, advocacy, witness preparation, courtroom performance, and judgment all become more important when the groundwork is assembled faster and more thoroughly. He also suggests that clients are beginning to care less about how many hours were spent reviewing documents and more about whether their lawyers are prepared, informed, and effective. For listeners interested in litigation, legal AI, and the next stage of law firm economics, this episode offers a thoughtful look at a company betting that the future belongs to tools built for depth, discipline, and the hard realities of dispute work.

Listen on mobile platforms: Apple Podcasts | Spotify | YouTube | Substack

[Special Thanks to Legal Technology Hub for sponsoring this episode.]

Email: geekinreviewpodcast@gmail.com
Music: Jerry David DeCicca

Transcript:

Continue Reading From Document Review to Fact Intelligence, Gregory Mostyn on How Wexler.ai Is Reshaping Litigation

The latest episode of The Geek in Review finds Greg Lambert and Marlene Gebauer back from Dallas with a sharp, grounded recap of the Texas Trailblazers conference, an event that stayed close to the daily realities of legal work instead of drifting into glossy predictions. Their conversation centers on a legal industry trying to sort out what AI means right now, in billing, workflow, training, pricing, governance, and client expectations. What stands out most is the hosts’ focus on the practical tension between what the tools are capable of and what law firms and legal departments are structurally ready to absorb.

A major thread in the discussion is the risk of what one speaker called “cognitive surrender,” the habit of trusting AI output too quickly and handing off too much human judgment in the process. Greg and Marlene treat this as less of a software issue and more of a workflow and education issue. The point is not whether AI produces polished work. The point is whether organizations are building systems where review, judgment, and accountability still sit with people. Their conversation ties this concern to legal practice, education, and even K-12 learning, showing how widespread the temptation has become to accept fluent output without enough friction or scrutiny.

The episode also takes a hard look at the pressure AI is putting on the billable hour. Marlene frames the issue well when she notes that AI does not kill the billable hour so much as expose its weaknesses. Across the conference, the hosts heard repeated concern about the mismatch between efficiency gains and the financial structures law firms still rely on. If AI reduces the time needed for many tasks, then firms, associates, pricing teams, and clients all have new incentives to sort through. Greg and Marlene highlight the awkward moment the industry is in, where firms want to talk about value while clients are also eyeing the chance to pay less for faster work. The result is a growing need for honest conversations about pricing, outcomes, and what legal value should mean when time is no longer the cleanest measure.

What gives the episode its energy is the number of concrete examples pulled from the conference. The hosts discuss lower-cost multi-state surveys, large-scale analysis of rights-of-way documents, and internal workflow improvements built with existing tools like SharePoint and Copilot on little or no budget. These stories show AI not as abstract promise, but as a way to get work done that used to be too expensive, too tedious, or too slow to tackle at all. At the same time, Greg and Marlene stay skeptical in the right places, especially when the conversation turns to legal research, citation accuracy, and the idea that technology vendors have somehow solved problems that law librarians and researchers know are stubbornly difficult.

By the end of the episode, the biggest takeaway is not that the legal industry has a clear answer, but that waiting for certainty is no longer a serious option. Greg and Marlene come away from Texas Trailblazers with a sense that real progress is happening through testing, discussion, and repeated adjustment, not through perfect plans. Their recap captures an industry in transition, one where law firms, legal ops teams, vendors, and clients are all feeling the strain between old business models and new technical possibilities. The message is simple and urgent: start the conversations now, use the tools now, and get honest about what must change before the gap between what is possible and what is workable gets even wider.

Listen on mobile platforms: Apple Podcasts | Spotify | YouTube | Substack

[Special Thanks to Legal Technology Hub for sponsoring this episode.]

Email: geekinreviewpodcast@gmail.com
Music: Jerry David DeCicca
Transcript:

Continue Reading Texas Trailblazers and the Hard Truth About AI in Legal Work

(See Day Two Coverage for the In-House Programs over on The Geek in Review Substack page – GL)

Day One of Texas Trailblazers in Dallas had a different tone than most legal tech conferences I attend. The conversations stayed close to the work. Less speculation, more discussion about what people are doing right now, where it is working, and where it is breaking.

Organized by Cosmonauts in partnership with LegalOps.com and held at The Statler Dallas, the Private Practice Day brought together law firm leaders, legal operations professionals, and technology vendors for a full day of keynotes, panels, and product demos. Joy Heath Rush, CEO of ILTA, chaired the day and opened with a fireside on the new era of conversations between general counsel and outside counsel. Her framing set the tone: AI is living at the intersection of business and technology, and it is creating conversations that simply did not exist two years ago.

Across the sessions and hallway conversations that followed, a few themes kept showing up. They were consistent whether the speaker came from a law firm, an in-house team, or a vendor building the tools.

Trust in AI is becoming a workflow problem

Rowan McNamee, Co-Founder and COO of Mary Technology, opened the sponsor keynote with a point that came up repeatedly throughout the day. AI outputs are persuasive. They read well. They look complete. That creates a tendency to move forward without enough friction in the process.

McNamee cited a recent study on what researchers call “cognitive surrender,” based on the work of Sean Hay and building on the framework in Thinking, Fast and Slow. Under time pressure, people rely on AI even when they know they should verify the result. In a series of experiments, override rates improved from 20% to 42% when participants had real money on the line. Even then, more than half still followed the AI’s answer. “You can reduce it, but you can’t eliminate it just by telling people to be careful,” McNamee noted.

In a legal setting, the implication is straightforward. Review cannot be optional. It has to be built into the process in a way that does not depend on someone remembering to slow down.

That concern echoed later in the day during the panel I moderated. Laura Ewing-Pearle, Senior Manager of eDiscovery and Practice Support Technology at Baker Botts, described a clear split even among e-discovery practitioners who are enthusiastic about AI. They lean heavily toward document interrogation and querying. They are far less comfortable with AI making responsiveness calls, and they are “definitely not comfortable with AI making privilege calls.” The line between assistance and reliance is still being worked out in real time.

Adding urgency to the conversation, panelists referenced a recent New York case in which a court ruled that AI-generated legal advice obtained through a public tool was not protected by attorney-client privilege. The ruling was specific to a consumer-facing chatbot, but the message landed clearly: enterprise AI tools with proper security and licensing are becoming a matter of professional risk management.

Adoption follows incentives, not access

The panel on Culture, Change, and Collaboration brought together Emma Dowden of Burges Salmon, Thom Wisinski of Haynes Boone, Kelley Lugo of Bryan Cave Leighton Paisner, Rowan McNamee, and Tammy Covert of Nachawati. The discussion stayed focused on what actually drives adoption inside organizations.

Continue Reading What I Took Away from Texas Trailblazers

This week we welcome Paula Reichenberg, founder of Neuron, for a sharp and thoughtful conversation about legal translation, artificial intelligence, and what happens when professional expertise collides with tools that look polished but still miss the mark. Paula shares her path from M&A and capital markets law into business school, legal services, machine learning, and finally legal tech entrepreneurship. What started as frustration with inefficiencies inside law firms grew into a translation business, then evolved again as machine translation improved and forced a harder question about survival, adaptation, and quality.

Paula explains how her early company, Hieronymus, found success by handling sensitive, high-stakes legal translations in Switzerland, especially where precision and confidentiality mattered most. But as machine translation improved, the market for average work started to disappear. Clients began doing more on their own, leaving only the hardest, highest-value assignments for specialists. Rather than ignore the shift, Paula leaned into it. That decision led her back to university, into data science and machine learning, and toward building Neuron, a company focused less on replacing expertise and more on improving the process around imperfect AI output.

A central theme of the discussion is the uncomfortable truth that many users do not care as much about excellence as professionals do. Paula makes the point with refreshing honesty. AI often produces work that is mediocre, but for a large share of users, mediocre is enough. That creates both a market shift and a professional dilemma. In legal translation, as in legal drafting more broadly, the issue is rarely whether AI produces something flawless. The issue is whether the user notices what is wrong, has the time to fix it, and has the systems in place to improve the result efficiently. Paula argues that the real value is not in claiming perfection. It is in helping experts find the mistakes faster, correct them with less pain, and avoid wasting hours doing work that feels like cleanup on aisle five.

The conversation also digs into trust, user behavior, and the strange authority people give to AI-generated answers. Paula recounts how, in one negotiation, a party trusted ChatGPT’s answer more than a human tax lawyer’s detailed explanation, even when the AI response was wrong. That anecdote opens up a broader discussion about confidence, presentation, and why polished outputs often feel more persuasive than expert judgment. Greg and Marlene connect that idea to legal systems, translation quality, and access to justice, especially where technology might offer better service than overworked and underfunded human systems. The result is not a simple pro-AI or anti-AI position. It is a grounded look at where human excellence still matters, where automation fills gaps, and where the future may split between mass-market convenience and premium, highly tailored expertise.

Looking ahead, Paula sees consolidation coming to legal tech, along with a growing push toward seamless interfaces that bring best-in-class features into one place. For Neuron, that means becoming an embedded layer inside other legal tools rather than forcing lawyers to juggle yet another standalone platform. Her crystal ball view is both stylish and sobering. She compares the future of legal services to retail and fashion, with more ready-to-wear solutions for everyday needs and a smaller, more exclusive market for bespoke legal work. It is a vivid way to frame what may be coming. The legal industry is not simply moving toward automation. It is sorting itself into tiers of service, quality, and expectation. And if Paula is right, the future belongs to those who understand where “good enough” ends and where true expertise still earns its premium.

Listen on mobile platforms: Apple Podcasts | Spotify | YouTube | Substack

[Special Thanks to Legal Technology Hub for sponsoring this episode.]

Email: geekinreviewpodcast@gmail.com
Music: Jerry David DeCicca

Transcript:

Continue Reading From Translation to Transformation: Paula Reichenberg on AI, Legal Quality, and the Future of Good Enough

This week, we sit down with two guests from Anthropic, Matt Samuels, Senior Product Counsel, and Den Delimarsky, a core maintainer of the Model Context Protocol, or MCP. Together, they unpack why MCP is drawing so much attention across the legal industry and why some are calling it the USB-C for AI. For law firms long burdened by disconnected systems, scattered data, and the infamous integration tax, MCP offers a shared framework for connecting models to the places where real work and real knowledge live, from iManage and Slack to email, data lakes, and internal tools.

Den explains that the promise of MCP is not tied to one model or one vendor. Instead, it creates a standardized way for AI tools to securely interact with many different systems without forcing organizations to build one-off integrations every time they want to connect a new source. The conversation gets especially relevant for legal listeners when Greg and Marlene press on issues like permissions, ethical walls, least-privilege access, and auditability. The answer from Anthropic is reassuring. MCP is built to work with familiar enterprise security concepts such as OAuth and role-based access, meaning firms do not have to throw out their security model in order to explore new AI workflows.
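For the technically curious, here is roughly what that standardization looks like in practice. This is a minimal sketch assuming the FastMCP helper from the MCP Python SDK (pip install mcp); the server name, the search_matters tool, and the in-memory document store are hypothetical stand-ins for a real iManage or Clio connector, not anything Anthropic describes in the episode.

```python
# Minimal sketch of an MCP server exposing one read-only "tool" to a model.
# Assumes the MCP Python SDK; the document store below is a toy stand-in
# for a real DMS such as iManage or Clio.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("firm-dms-connector")

# Toy in-memory "document management system" used for illustration.
FAKE_DMS = {
    "2024-0151": "Asset Purchase Agreement - Project Falcon (draft v7)",
    "2024-0198": "Master Services Agreement - Acme Corp (executed)",
}


@mcp.tool()
def search_matters(query: str, limit: int = 5) -> list[str]:
    """Return matter IDs and titles whose title matches the query.

    In production this call would hit the DMS API using the firm's existing
    OAuth credentials, so role-based permissions and ethical walls still apply.
    """
    hits = [
        f"{matter_id}: {title}"
        for matter_id, title in FAKE_DMS.items()
        if query.lower() in title.lower()
    ]
    return hits[:limit]


if __name__ == "__main__":
    # Runs over the default stdio transport, so any MCP-aware client
    # (Claude Desktop, an agent framework, etc.) can connect to it.
    mcp.run()
```

Because the server speaks a standard protocol over a standard transport, any MCP-aware client can discover and call that tool without a bespoke integration, which is exactly the one-connector-instead-of-many promise the guests describe.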

Matt brings the legal and operational lens, translating MCP into practical use cases for lawyers, legal ops teams, and security leaders. He describes how AI becomes far more useful once it has access to the systems lawyers already rely on every day, while still operating within carefully defined administrative controls. The discussion highlights a key shift in how firms should think about AI. This is no longer about asking a chatbot a clever question and getting a polished paragraph back. With MCP, firms are moving toward systems where AI can retrieve, correlate, summarize, draft, and support actions across multiple platforms, all while staying inside the guardrails set by the organization.

The episode also explores how MCP fits into the rise of agentic workflows, apps, plugins, and skills. Rather than treating AI as a static assistant, Anthropic describes a future where these tools become active participants in legal work, pulling together information from multiple sources, helping assemble case timelines, drafting notes into a shared document, and supporting lawyers in a far more integrated workspace. The conversation around skills is especially useful for firms thinking about standard operating procedures, preferred drafting styles, escalation rules, and repeatable work product. Skills and MCP do different jobs, but together they start to look like the operating system for structured legal workflows.

By the end of the conversation, one message comes through clearly. The legal profession is still early in this shift, but the pace is picking up fast. Both Matt and Den encourage listeners to stop treating these tools like abstract future concepts and start experimenting with them now. At the same time, they offer an important note of caution. As much as these systems promise speed and efficiency, lawyers still need to protect the craft of lawyering, their judgment, and the human choices that matter most. For firms trying to make sense of where AI is headed next, this episode offers a grounded and practical look at the infrastructure layer that could shape what comes next.

Listen on mobile platforms: Apple Podcasts | Spotify | YouTube | Substack

[Special Thanks to Legal Technology Hub for sponsoring this episode.]

Email: geekinreviewpodcast@gmail.com
Music: Jerry David DeCicca

Transcript:

Continue Reading Anthropic’s Matt Samuels and Den Delimarsky – Claude & MCP: Building the USB-C for the Legal Tech Stack

This week we welcome back Niki Black to unpack the findings from the newly released 2026 Legal Industry Report from 8am. The conversation centers on a legal profession moving into a new phase of AI adoption, where individual lawyers are embracing general purpose AI tools at a striking pace, while many firms still lack even basic policies or training. Niki explains that this disconnect is especially visible among solo, small, and mid-sized firms, where limited resources often slow formal governance even as day-to-day use rises fast.

A major theme of the discussion is the widening gap between personal experimentation and institutional readiness. Niki notes that lawyers are not waiting for permission, and many are already relying on AI to support research, drafting, and routine work. At the same time, firms are struggling to provide guidance, training, and guardrails. The episode highlights the growing risk of shadow AI in legal practice, especially when lawyers and staff turn to unsanctioned tools to keep pace with client demands. For smaller firms, the answer is not elaborate bureaucracy, but practical direction, clear expectations, and a recognition that even a modest policy is better than none.

The conversation also turns to client expectations and the economic pressure AI is placing on the traditional law firm model. Greg and Marlene press Niki on whether firms are truly ready to move away from the billable hour as AI compresses the time needed to complete legal work. Niki argues that large firms face deep structural obstacles because compensation systems, staffing models, and internal economics remain tied to hourly billing. Still, she sees pressure building from in-house counsel, boutique competitors, and smaller firms that use technology to deliver comparable work at lower cost. The result is a market that may resist change, but not escape it.

Another standout part of the episode explores how AI is reshaping access to justice. Niki points to the promise of generative AI as a force multiplier for legal aid lawyers and public defenders, especially when paired with trusted tools and better funding. She rejects the idea that technology alone will solve the justice gap, but makes a strong case that AI, combined with stronger institutional support, helps lawyers serve more people with better results. At the same time, the hosts and Niki acknowledge the risks of a two-tiered system, where wealthier clients benefit from high quality tools while vulnerable users face lower quality, error-prone outputs.

By the end of the episode, the conversation expands from AI tools to a broader structural shift across firms, clients, and law schools. Niki sees the next three to five years as a period of deep change, where pricing, training, competition, and professional expectations all evolve at once. She also shares her own methods for keeping up, including RSS feeds, trusted blogs, and LinkedIn, with a few playful complaints about Substack making life more complicated. The episode leaves listeners with a clear message: the biggest issue is no longer whether AI will affect legal practice. It already is. The real question is whether the profession can adapt fast enough to manage the consequences wisely.

Listen on mobile platforms: Apple Podcasts | Spotify | YouTube | Substack

[Special Thanks to Legal Technology Hub for sponsoring this episode.]

Email: geekinreviewpodcast@gmail.com
Music: Jerry David DeCicca

Transcript:

Continue Reading Niki Black on AI Adoption, Billing Pressure, and the Governance Gap in Legal

Anastasia Boyko joins us this week for a wide-angle conversation about AI adoption, leadership, and the uncomfortable truth behind “we are watching what peer firms do.” A Yale-trained tax lawyer with experience spanning Axiom, legal education, and innovation leadership, Boyko argues that precedent-driven instincts are turning into a liability when the underlying rules of the market are shifting in real time.

The episode opens with lessons from the Women + AI 2.0 Summit at Vanderbilt and the “AI competence penalty” narrative. Boyko’s central principle for law firm leaders is simple: stop copying the competition and start operating with intention. Strategic planning matters more than tool shopping, especially when uncertainty makes leaders freeze, over-index on fear, or chase noise instead of outcomes.

From there, the conversation sharpens into client reality. Boyko shares what she is hearing from in-house leaders, and it is not comforting for firms. Legal departments are working to reduce dependence on outside counsel, business partners inside companies often accept “good enough,” and the models keep improving. The risk is not losing to a peer firm; it is losing the client relationship because the work stops feeling necessary.

A major theme is talent and the apprenticeship gap. Boyko argues firms underinvest in people, even as they spend aggressively on software stacks. AI can help junior lawyers with coaching and confidence, but it does not replace mentorship, judgment-building, or context. The skills that matter now include client advisory, operational thinking, critical judgment, and the ability to solve problems across a complex system, not only perform discrete tasks in a vacuum.

The episode closes on legal education and the future value of the JD. Boyko urges students to be selfish about learning AI, especially when faculty guidance comes from avoidance or philosophy rather than experimentation. Looking ahead, she predicts the JD’s value shifts upward, away from rote production and toward proactive advisory work, relationships, anticipatory counsel, and wisdom-driven judgment. In other words, fewer fire drills, more looking around corners.

Listen on mobile platforms: Apple Podcasts | Spotify | YouTube | Substack

[Special Thanks to Legal Technology Hub for sponsoring this episode.]

Email: geekinreviewpodcast@gmail.com
Music: Jerry David DeCicca

Transcript:

Continue Reading Anastasia Boyko on Advisor Mode, Training Lawyers for the Post-Pyramid Firm

This week we go “talk show mode” for a special episode where Marlene recaps her trip to the Women + AI 2.0 Summit at Vanderbilt Law, hosted by Cat Moon, and shares why the event felt different from the standard conference grind, more energy, more structure, and yes, a DJ.

The summit’s core focus sits right on a tension point in the wider AI conversation. There’s a persistent narrative that women use AI less than men. Cat Moon’s framing sets the tone for a day built around participation and peer connection: if the narrative is true, it’s a problem, and if it’s false, it’s also a problem. The format uses “spark” cards (mini, midi, and maxi prompts) to push attendees into small conversations, deeper reflection, and a final takeaway.

Marlene also highlights sobering research shared during the opening, including an “AI competence penalty” dynamic where identical work is judged differently depending on whether evaluators believe a man or a woman used AI. The discussion lands on why these biases matter inside legal workplaces, and what leaders and peers can do to reduce the social cost of being open about AI usage.

Interspersed throughout are short interviews with attendees and speakers. Nicole Morris (Emory) captures the day’s purpose, expanding AI knowledge, talking risks, and connecting across roles. Sabra Tomb (University of Dayton School of Law) reframes AI as a leadership amplifier, moving from day-to-day management overload toward strategy and vision. Adele Shen (Vanderbilt) offers a funny but sharp taxonomy of AI “experts,” including “technocratic oracles,” “extinction alarmists,” and “touch grass humanists,” which sparks a candid side conversation about self-promotion, authority vibes, and who becomes “the story” in AI discourse.

The episode closes with a look at how education and training can work better. Marlene and Greg lean into peer show-and-tell sessions, leadership modeling, and safe spaces, both governance-safe and learning-safe. A two-person segment from Suffolk Law (Chanal Neves McClain and Dyane O’Leary) adds a teaching twist, integrating AI tools into skills instruction without isolating “AI week” from real lawyering judgment. The final note comes from Stephanie Everett (Lawyerist) on the power of stories, and the reminder that people do not need to internalize the narrative someone else hands them.

Listen on mobile platforms: Apple Podcasts | Spotify | YouTube | Substack

[Special Thanks to Legal Technology Hub for sponsoring this episode.]

Email: geekinreviewpodcast@gmail.com
Music: Jerry David DeCicca

Transcript:

Continue Reading Women + AI Summit, Real Talk: Leadership, Learning, and Not Letting “The Trap” Write Your Story