I have a prediction that I want to share with you. This is something that I envision happening just a few short weeks from now. I imagine seeing an associate at a law firm doing something that will make every product manager at Thomson Reuters and LexisNexis choke on their morning coffee. She has a contract dispute question. A real one. A partner is waiting, and the clock is ticking.

She won’t open Westlaw. She won’t open Lexis. She won’t open her browser at all.

She types her question into a work-approved AI chat window. Twenty-four minutes later she has a memo, citations included, sent off to the partner. She enters 0.4 hours in her time-entry system, and she is done.

The KeyCite red flag that Thomson Reuters spent generations building? Never saw it. The Shepard’s signal? Didn’t see that either. The annotated treatise hierarchy that some editor in Eagan, Minnesota agonized over? Came through as a flat blob of text in a JSON response that the model summarized into a single sentence.

Don’t get me wrong. Westlaw and Lexis were in the research process. She just didn’t notice they were there. And that, friends, is the near-future I want to talk about.

I’ve been privately calling this “Shadow UX.” Think of it as the user-experience cousin of Shadow IT. We all know the effects of Shadow IT, right? That was when the marketing team started using Dropbox without telling the IT department, and three years later IT realized the entire company’s roadmap was sitting on someone’s personal account. Shadow UX is the same thing at the interface layer. An unauthorized layer sitting between the user and the vendor’s product, and the vendor doesn’t control it, doesn’t design it, and increasingly doesn’t even know it exists.

For legal information vendors, the Shadow UX layer is mostly an LLM with a few tool calls bolted on. There are other versions out there too: browser extensions that re-skin search results, paralegals building Notion dashboards off APIs, scraping wrappers feeding firm intranets. The AI agent is the one eating everyone’s lunch though.

Here’s why I’ve been thinking about Shadow UX so much lately.

For thirty years vendors competed on the browser/dashboard. The fancy charts. The little visual icons. The hover states. Pixel-perfect interfaces designed for a human eye scanning a screen. In 2026, the user is increasingly something else. It’s a model reading a JSON schema at inference time. If you’ve optimized your product for an audience that’s becoming the minority of your traffic, you’re going to find out the hard way.

OK so why now? Three things had to happen at the same time, and they all did within about eighteen months.

First, the models actually got good. The 2024 models couldn’t handle jurisdictional nuance. The 2026 models draft memos that pass partner review. Not every time, sure, though often enough that associates are using them anyway.

Next, the billable hour math became impossible to ignore. We bill in six-minute increments. Any tool that turns a ninety-minute task into nine minutes is going to get used, with or without IT’s blessing. (Sound familiar? Hello again, Shadow IT.)
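To see why that math is so hard to resist, here is a quick sketch of the six-minute-increment arithmetic from the opening scenario. The 0.1-hour billing unit is the industry standard; the task durations are illustrative only.

```python
# Six-minute (0.1 hour) billing increments, rounded up -- the standard
# convention. Task times below are illustrative, not real firm data.
import math

INCREMENT_HOURS = 0.1  # one six-minute unit

def billable_hours(minutes_worked: float) -> float:
    """Round time worked up to the next six-minute increment."""
    return round(math.ceil(minutes_worked / 6) * INCREMENT_HOURS, 1)

# The memo from the opening scenario: 24 minutes of agent-assisted work
print(billable_hours(24))   # 0.4 -- the associate's time entry
# The same task done the traditional way
print(billable_hours(90))   # 1.5
```

A ninety-minute task collapsing to 0.4 billed hours is exactly the kind of delta that gets a tool adopted before anyone asks permission.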

And finally, the Model Context Protocol showed up. MCP is the part of this story that doesn’t get enough attention. Imagine if every database, every research platform, every internal wiki spoke a common language to AI agents. That’s MCP. Companies like NetDocuments and Midpage adopted it. Specialized vendors are rolling out MCP servers for everything from patent search to legislative tracking. Once the protocol got standardized, the vendor’s UI stopped being a moat and started being a speed bump.
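For the curious, here is a hedged sketch of what “speaking a common language” looks like on the wire. MCP is JSON-RPC 2.0 underneath, and an agent invokes a server’s tool with a `tools/call` request. The tool name and arguments below are hypothetical; no real vendor’s endpoint is being described.

```python
# A minimal sketch of an MCP tool invocation (JSON-RPC 2.0). The tool
# name "search_case_law" and its arguments are hypothetical examples.
import json

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_case_law",   # hypothetical tool name
        "arguments": {
            "query": "anticipatory repudiation of a supply contract",
            "jurisdiction": "NY",    # hypothetical parameter
        },
    },
}

# Any client that can emit this shape can talk to any MCP server --
# which is exactly why the vendor's UI stops being the chokepoint.
print(json.dumps(request, indent=2))
```

The standardization is the whole point: the same request shape works against a patent-search server, a legislative tracker, or a case-law database.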

Now here’s the part that should worry legal information providers. The editorial work that built these companies, the headnotes, the Key Number system, KeyCite, Shepard’s, all of that gets flattened.

In a portal, a KeyCite red flag is loud. It’s red. It’s literally a flag. You see it before you see anything else on the page. In the Shadow UX layer, it’s a token in a JSON field. If the model’s summarization logic doesn’t promote it, the user never sees it. The signal is technically still there. It’s just invisible.
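Here is a toy illustration of that flattening. The response shape and field names are hypothetical, and the case is invented; the point is only that a citator signal survives as data but dies in summarization unless something deliberately promotes it.

```python
# Toy example: a treatment flag buried in a response dict. All field
# names and the case itself are hypothetical.

api_response = {
    "case": "Smith v. Jones, 123 F.3d 456",
    "holding": "Summary judgment affirmed on the contract claim.",
    "treatment": "negative",   # the "red flag" -- now just another key
}

def naive_summary(resp: dict) -> str:
    """What a generic summarizer might do: grab the holding, drop the rest."""
    return f"{resp['case']}: {resp['holding']}"

def treatment_aware_summary(resp: dict) -> str:
    """What a careful integration does: promote the warning ahead of the prose."""
    warning = ("[NEGATIVE TREATMENT -- verify before citing] "
               if resp.get("treatment") == "negative" else "")
    return warning + naive_summary(resp)

print(naive_summary(api_response))            # flag silently gone
print(treatment_aware_summary(api_response))  # flag surfaced
```

Both summaries are “accurate.” Only one of them keeps the associate from citing bad law.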

The headnote tree is worse. Editors spent generations nesting these things to show legal relationships. Models hate hierarchies. They flatten them into bullet lists, or worse, into prose. The categorical context disappears.

And then there’s the provenance problem, which is the one that actually worries me. When an agent synthesizes ten cases into one paragraph, the user gets a confident narrative. They don’t see that eight came from KeyCite-validated sources and two came from a sketchy public database the model decided to trust. The vendor’s brand was always the proxy for “this is reliable.” When the brand is invisible, the proxy is gone.

I’ll put it bluntly. If you’re a research vendor, your brand value is currently being laundered through someone else’s chat interface, and you’re not getting credit for it.

The pricing model is the other shoe about to drop.

Seat-based pricing is the deal we’ve all lived with since the 90s. You pay per lawyer. The lawyer logs in. Everybody understands. Now… an AI agent doesn’t log in. It doesn’t have a seat. It can do the work of fifteen associates in an afternoon though. So vendors are watching seat counts flatten while their compute costs spike. The infrastructure bill goes up while the revenue line goes sideways. That’s not a sustainable shape.

The industry is wobbling toward usage-based and outcome-based pricing. Pay per query. Pay per resolved research task. Pay per drafted clause. Salesforce and Zendesk are already doing this in their own categories. The math makes sense for vendors. The problem is that law firms hate metered bills. CIOs cite cost forecasting as the number one headache with consumption pricing. Nobody wants their Westlaw bill to look like an AWS invoice.

Here’s where the real fight is going to happen, and I haven’t seen anybody talk about it openly yet.

Put yourself in the chair of a Westlaw or Lexis sales VP. You’re watching seat utilization drop. Associates are logging in less. Partners barely log in at all. The minutes-per-seat metric you’ve been using internally to justify renewals is collapsing. Meanwhile your compute costs are spiking because the firm’s MCP-connected agents are hammering your APIs and MCP servers at three in the morning to draft research memos.

What do you do?

I’ll tell you what you do. You add an AI agent access fee on top of the seat license. Premium tier. “Enterprise agentic access.” Whatever the marketing team lands on. And you keep raising the per-seat price every renewal cycle. Because if each seat is getting cheaper for the firm to actually use, your only path to flat or growing revenue is to charge more for each one. Double dip. Seats plus agents. Stack them.

Now flip the chair. You’re a firm CIO or a law firm library director. Your usage data shows seat logins dropping. Your associates are no longer going directly to Westlaw or Lexis. The vendor calls to renew, the price per seat is up 10%, and now there’s a separate line item for “agent access” that wasn’t on last year’s quote. You ask why you’re paying more for less. The vendor explains, with a straight face, that the value sits in the data, the agent extracts more value per query, and the bill reflects that. You disagree.

That’s the battle.
Continue Reading Shadow UX and the Upcoming Fight over Legal Research

This week we welcome Paula Reichenberg, founder of Neuron, for a sharp and thoughtful conversation about legal translation, artificial intelligence, and what happens when professional expertise collides with tools that look polished but still miss the mark. Paula shares her path from M&A and capital markets law into business school, legal services, machine learning, and finally legal tech entrepreneurship. What started as frustration with inefficiencies inside law firms grew into a translation business, then evolved again as machine translation improved and forced a harder question about survival, adaptation, and quality.

Paula explains how her early company, Hieronymus, found success by handling sensitive, high-stakes legal translations in Switzerland, especially where precision and confidentiality mattered most. But as machine translation improved, the market for average work started to disappear. Clients began doing more on their own, leaving only the hardest, highest-value assignments for specialists. Rather than ignore the shift, Paula leaned into it. That decision led her back to university, into data science and machine learning, and toward building Neuron, a company focused less on replacing expertise and more on improving the process around imperfect AI output.

A central theme of the discussion is the uncomfortable truth that many users do not care as much about excellence as professionals do. Paula makes the point with refreshing honesty. AI often produces work that is mediocre, but for a large share of users, mediocre is enough. That creates both a market shift and a professional dilemma. In legal translation, as in legal drafting more broadly, the issue is rarely whether AI produces something flawless. The issue is whether the user notices what is wrong, has the time to fix it, and has the systems in place to improve the result efficiently. Paula argues that the real value is not in claiming perfection. It is in helping experts find the mistakes faster, correct them with less pain, and avoid wasting hours doing work that feels like cleanup on aisle five.

The conversation also digs into trust, user behavior, and the strange authority people give to AI-generated answers. Paula recounts how, in one negotiation, a party trusted ChatGPT’s answer more than a human tax lawyer’s detailed explanation, even when the AI response was wrong. That anecdote opens up a broader discussion about confidence, presentation, and why polished outputs often feel more persuasive than expert judgment. Greg and Marlene connect that idea to legal systems, translation quality, and access to justice, especially where technology might offer better service than overworked and underfunded human systems. The result is not a simple pro-AI or anti-AI position. It is a grounded look at where human excellence still matters, where automation fills gaps, and where the future may split between mass-market convenience and premium, highly tailored expertise.

Looking ahead, Paula sees consolidation coming to legal tech, along with a growing push toward seamless interfaces that bring best-in-class features into one place. For Neuron, that means becoming an embedded layer inside other legal tools rather than forcing lawyers to juggle yet another standalone platform. Her crystal ball view is both stylish and sobering. She compares the future of legal services to retail and fashion, with more ready-to-wear solutions for everyday needs and a smaller, more exclusive market for bespoke legal work. It is a vivid way to frame what may be coming. The legal industry is not simply moving toward automation. It is sorting itself into tiers of service, quality, and expectation. And if Paula is right, the future belongs to those who understand where “good enough” ends and where true expertise still earns its premium.

Listen on mobile platforms: Apple Podcasts | Spotify | YouTube | Substack

[Special Thanks to Legal Technology Hub for sponsoring this episode.]

Email: geekinreviewpodcast@gmail.com
Music: Jerry David DeCicca
Transcript

Continue Reading From Translation to Transformation: Paula Reichenberg on AI, Legal Quality, and the Future of Good Enough

(How to Create a Claude Skill or Plugin for Law and Use It in Claude Cowork)

On February 3, 2026, a single product announcement from Anthropic wiped approximately $285 billion in market capitalization off the stock market in a single trading day. Thomson Reuters dropped 16%. LegalZoom cratered nearly 20%. RELX, the parent company of LexisNexis, fell 14%. Wolters Kluwer lost 13%. The London Stock Exchange Group plunged 8%. The contagion spread to Salesforce, ServiceNow, FactSet, and dozens of other enterprise software companies. Bloomberg called it a “$285 billion rout.” Analysts started calling it the “SaaSpocalypse.”

The catalyst? Anthropic released a set of open-source plugins for its Claude Cowork tool. One of them was a legal plugin that could review contracts, flag risks, triage NDAs, and track compliance. Investors took one look and decided the entire SaaS business model was cooked.

Here’s the part that still gets me: the legal skill at the core of this market panic is, at its heart, a structured markdown file. We’re talking about roughly 250 lines of well-organized pseudo-code, plain text instructions that tell Claude how to think about legal workflows. No compiled binaries. No proprietary algorithms. No infrastructure. Just markdown. And it’s included in the $20/month Claude Pro subscription. That’s less than most attorneys spend on lunch. The full Business Insider breakdown of the stock carnage is here: https://www.businessinsider.com/anthropic-cowork-legal-plugin-publishing-stocks-legalzoom-thomson-reuters-relx-2026-2

So What Are Claude Skills & Plugins, Exactly?

A Claude Skill is a set of instructions, typically written in markdown, that teaches Claude how to approach a specific domain or task in a repeatable, structured way. Think of it as giving Claude a playbook. Instead of prompting from scratch every time you need a contract reviewed or a compliance check run, you write the instructions once, and Claude follows them consistently. Skills include things like domain knowledge (“here’s how our firm handles risk assessment”), step-by-step workflows (“when reviewing an NDA, check these clauses in this order”), and output formatting (“flag issues as green/yellow/red with recommended language”). They’re stored as plain files, organized in folders, and loaded automatically when relevant. A Claude Plugin is a bundle that packages one or more skills together with slash commands, MCP connectors, and sub-agents into a single installable unit, while a skill on its own is just a standalone markdown file that teaches Claude how to handle a specific task or domain.
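To make that concrete, here is a hedged sketch of what a minimal skill file can look like. The frontmatter fields follow the shape of Anthropic’s published examples, but the skill name, clauses, and firm positions are entirely hypothetical:

```markdown
---
name: nda-triage
description: Reviews NDAs against the firm playbook and flags risky clauses.
---

# NDA Triage Skill

When the user asks you to review an NDA:

1. Check these clauses in order: definition of confidential information,
   term and survival, non-solicitation, governing law.
2. Compare each clause against the firm standard positions below.
3. Flag each issue as green (acceptable), yellow (negotiate), or
   red (escalate to a partner), with recommended replacement language.

## Firm standard positions

- Term: 2 to 3 years; survival of confidentiality no longer than 5 years.
- Governing law: prefer New York; flag anything offshore as red.
```

Notice there is nothing here a non-programmer couldn’t write. It really is just a clear memo with a little structure on top.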

The important thing to understand is that you don’t have to write these from scratch. You can work with Claude itself to build skills and plugins. Describe what you want the workflow to do, what your firm’s processes look like, what outputs you need, and Claude will help you draft, test, and refine the skill. Anthropic has also open-sourced eleven starter plugins on GitHub (https://github.com/anthropics/knowledge-work-plugins/tree/main/legal) that you can customize. The barrier to entry here is remarkably low. If you can write a clear memo, you can write a skill.

Now Let’s Talk About Claude Cowork

Claude Cowork is Anthropic’s desktop tool that takes Claude out of the chat window and puts it to work on your actual files. You point it at a folder on your computer, give it a task, and it plans, executes, and iterates through multi-step workflows on its own. It can read documents, create new files, delete files, organize folders, and coordinate multiple workstreams. If you’ve used Claude Code (the developer-facing terminal tool), Cowork is the same engine with a friendlier interface. It’s essentially Claude Code for laptop professionals who don’t want to learn code. It is currently only available on macOS, but Anthropic recently launched the Claude app for PC, so we’re hopeful Cowork shows up there soon.

What makes Cowork interesting for legal professionals is that it doesn’t just respond to one prompt at a time. You set a goal, and Claude works through it like a (very fast, very thorough) junior associate. It asks for clarification when it needs it, saves outputs directly to your file system, and loops you in on its progress. The plugin system takes this further: install a legal plugin, and Cowork immediately knows your firm’s preferred tone, your risk tolerances, your clause library, your review workflow, which files to reference as templates, and so on. It’s a configurable work environment; no more cutting and pasting required.

One more thing worth noting, because I think it says something profound about where we are: Cowork was built entirely by Claude Code in approximately 10 days. An AI coding agent built its own non-technical sibling, and that sibling then crashed the legal tech market. 2026 is going to be wild.

Practical Examples: Skills for Lawyers and Legal Academics

Enough theory; let’s build something. Below are four practical examples, two for practicing attorneys and two for legal academics, showing how you could combine Claude Skills and Cowork to create custom workflows. Each example includes the actual markdown you’d use to create the skill. Markdown is still human-readable; it just gives useful formatting to the text for an LLM. Basically, any complex prompting I do these days is in markdown.

For Lawyers

  1. Client Intake and Conflict Check Workflow

Continue Reading How to Crash the Legal Tech Market

A fresh Anthropic announcement set off a week of market jitters and existential questions about what happens when the big model shops ship “legal productivity” features and the public markets flinch. This week, we bring Otto von Zastrow back for a rapid-response conversation, with a front-row view from New York and a blunt take: software grows cheaper to reproduce, so value migrates. The discussion lands on a key distinction, interface versus data, and why the old guard still holds leverage even as new entrants sprint.

From there, the conversation zooms in on “systems of record” and the uneasy truth that the safest vault often loses mindshare when a new interface sits on top. Otto points to email, calendar, SharePoint, DMS platforms, and the growing power of a single chat workspace to become the place where work happens. The hosts press on a critical nuance for lawyers: legal research data is not flat, and “good law” demands hierarchy, treatment, and reliable citation context, not a pile of cases plus vibes.

Otto frames Midpage.ai as a data company first, built on continuous court ingestion plus normalization that used to demand armies of editors. He argues AI turns messy inputs into structured repositories at a scale that favors speed and breadth, yet accuracy still requires process design and verification loops. Greg sharpens the point for litigators: the bar is not clever answers, the bar is defensible citations, negative treatment, and confidence that the record matches reality. Otto agrees on the need for trust, then flips the lens: many annotation tasks look like grind work where modern models, paired with strong QA, start to outperform large manual pipelines.

The headline feature is integration via Model Context Protocol, described as a USB-C style connector for tools and models. Midpage chose distribution inside Claude and ChatGPT rather than forcing lawyers into yet another standalone site. Otto explains the wager: lawyers want fewer surfaces, and general chat platforms ship features at a pace no niche vendor matches alone, so the smart move is to meet users where daily work already lives. The demo story centers on research inside chat, with Midpage returning real case links and citations, then letting the user push deeper with uploads and follow-on tasks, while keeping verification one click away.

The back half turns to second-order effects: pricing, agent spend, and the rise of “vibe” work where professionals act more like managers of agent teams than sole authors of first drafts. Marlene raises governance and liability when internal DIY tools pop up outside formal review, and Otto predicts a pendulum toward professionalized deployment plus change management. The conversation closes on Midpage’s “holy grail” topic, citators and the case relationship graph, plus a clear-eyed forecast: standalone research websites shrink as a primary workspace, while research becomes groundwork performed by agents, with lawyers spending more time interrogating results than running searches.

Listen on mobile platforms: Apple Podcasts | Spotify | YouTube | Substack

[Special Thanks to Legal Technology Hub for sponsoring this episode.]

Email: geekinreviewpodcast@gmail.com
Music: Jerry David DeCicca

Transcript

Continue Reading Midpage Goes Native: Legal Research Inside Claude and ChatGPT, with Otto von Zastrow

In this episode of The Geek in Review, we welcome back Pablo Arredondo, VP of CoCounsel at Thomson Reuters, along with Joel Hron, the company’s CTO. The conversation centers on the recent release of ChatGPT-5 and the rise of “reasoning models” that go beyond traditional language models’ limitations. Pablo reflects on his years of tracking neural net progress in the legal field, from escaping “keyword prison” to the current ability of AI to handle complex, multi-step legal reasoning. He describes scenarios where entire litigation records could be processed to map out strategies for summary judgment motions, calling it a transformative step toward what he sees as “celestial legal products.”

Joel brings an engineering perspective, comparing the legal sector’s AI trajectory to the rapid advancements in AI developer tools. He notes that these tools have historically amplified the skills of top performers rather than leveling the playing field. Applied to law, he believes AI will free lawyers from rote work and allow them to focus on higher-value decisions and strategy. The discussion shifts to Deep Research, Thomson Reuters’ latest enhancement for CoCounsel, which leverages reasoning models in combination with domain-specific tools like KeyCite to follow “breadcrumb trails” through case law with greater accuracy and transparency.

The trio explores the growing importance of transparency and verification in AI-driven research. Joel explains how Deep Research provides real-time visibility into an AI’s reasoning path, highlights potentially hallucinated citations, and integrates verification tools to cross-check references against authoritative databases. Pablo adds historical and philosophical perspective, likening hallucinations to a tiger “going tiger,” stressing that while the risk cannot be eliminated, the technology already catches a significant number of human errors. Both agree that AI tools must be accompanied by human oversight and well-designed workflows to build trust in their output.

The conversation also delves into the challenges of guardrails and governance in AI. Joel describes the balance between constraining AI for accuracy and keeping it flexible enough to handle diverse user needs. He introduces the concept of varying the “leash length” on AI agency depending on the task—shorter for structured workflows, longer for open-ended research. Pablo challenges the legal information community to break down silos between disciplines like eDiscovery, research, and litigation, envisioning a unified information ecosystem that AI could navigate seamlessly.

Looking to the future, Joel predicts that the adoption of AI agents will reshape organizational talent strategies, elevating the importance of those who excel at complex decision-making. Pablo proposes “ambient AI” as the next frontier—intelligent systems that unobtrusively monitor legal work, flagging potential issues instantly, much like a spellchecker. Both caution that certain legal tasks, especially in judicial opinion drafting, warrant careful consideration before fully integrating AI. The episode closes with practical insights on staying current, from following AI researchers on social platforms to reading technical blogs and academic papers, underscoring the need for informed engagement in this rapidly evolving space.

Listen on mobile platforms: Apple Podcasts | Spotify | YouTube

[Special Thanks to Legal Technology Hub for sponsoring this episode.]

Blue Sky: @geeklawblog.com @marlgeb
Email: geekinreviewpodcast@gmail.com
Music: Jerry David DeCicca


Transcript

Continue Reading Pablo Arredondo and Joel Hron on Reasoning Models, Deep Research, and the Future of Legal AI

This week, we welcome longtime friend and legal tech veteran Ken Crutchfield, founder of Spring Forward Consulting. Ken brings his extensive experience from major legal information vendors like Thomson Reuters, Bloomberg, and Wolters Kluwer into a timely and candid discussion about the current phase of artificial intelligence in the legal industry. Comparing today’s generative AI surge to the American Industrial Revolution, Ken describes this moment as the “Wild West” era—full of promise, hype, overinvestment, and, critically, few rules.

Drawing historical parallels to railroads, oil barons, and steel magnates, Ken illustrates how unchecked growth and technological innovation can outpace regulation until market forces or policy catch up. He notes the resurgence of large-scale infrastructure investment, now not in steel or steam, but in compute power and data centers. Just as J.P. Morgan helped stabilize chaotic markets in the 19th century, Ken suggests today’s AI frontier needs a similar recalibration, and possibly new rules of engagement.

The conversation shifts toward the practical realities of legal tech adoption. Ken emphasizes that law firms’ expectations of perfection often collide with startups’ resource limitations. Vendors need to rethink how they engage with firms by building credibility, focusing on integration, and delivering actual use-case wins. Firms, in turn, must move beyond the billable hour mindset and consider new metrics like Return on Experience. Adoption is no longer optional, it’s strategic, competitive, and increasingly client-driven.

Ken also unpacks the looming implications of content rights and data ownership in the age of AI. If firms aren’t investing in data hygiene now, they risk being left behind when more sophisticated AI tools demand clean, structured, and secure datasets. AI isn’t just about automating workflows, it’s about being ready to plug into a future where interoperability, metadata, and permissions will dictate who thrives and who gets leapfrogged.

Finally, Ken calls for scenario planning: not just reacting to what OpenAI or Anthropic might do next, but anticipating it. Firms and vendors alike should double down on what works, define success before launching new projects, and invest in meaningful adoption strategies. In a world moving this fast, it’s no longer about who gets there first, it’s about who gets there with a plan.

Listen on mobile platforms: Apple Podcasts | Spotify | YouTube

[Special Thanks to Legal Technology Hub for sponsoring this episode.]

Blue Sky: @geeklawblog.com @marlgeb
Email: geekinreviewpodcast@gmail.com
Music: Jerry David DeCicca

Transcript

Continue Reading The Wild West of AI: A Legal Tech Reckoning with Ken Crutchfield

In this episode of The Geek in Review, hosts Marlene Gebauer and Greg Lambert sit down with Otto von Zastrow, the founder and CEO of MidPage.AI, an AI-native legal research platform. With a recent $4 million seed round and an ambitious mission to rival legacy research tools, MidPage is drawing attention across the legal industry. Otto shares his unconventional journey from AI-powered lawn robotics to transforming how litigators interact with case law. His pivot into legal tech was fueled by a combination of technical curiosity, the rise of language models, and firsthand insight from his lawyer friends overwhelmed by inefficient research workflows.

Otto walks listeners through the core of MidPage’s offering, which includes the usual suspects—case law, statutes, regulations—but with a twist: smarter search tools, intuitive UI, and features like a proprietary citator and their newly launched Proposition Search. This feature aims to solve the long-standing “needle-in-a-haystack” problem by surfacing judicial language that matches precise arguments, accompanied by contextual metadata and filters. Otto highlights that the goal isn’t just to match or mimic tools like Lexis or Westlaw, but to rethink what legal research should feel like when modern AI capabilities are built in from the ground up.

One of the more unique aspects of MidPage’s product development is their internal “kangaroo court”—a monthly teamwide challenge where employees, regardless of role, must conduct legal research using MidPage or traditional tools. Otto notes that this process not only improves product design but builds real empathy for the user experience. Engineers and designers are encouraged to think like litigators, helping identify pain points and close functionality gaps. As a result, the product continually evolves based on firsthand user scenarios, not just speculation.

The episode also delves into the data-side challenges that have historically prevented innovation in legal research. Otto explains why now—thanks to improved AI models and open access to data—is a rare inflection point for startups. He emphasizes the strategic importance of MidPage building its own case law dataset to avoid being beholden to incumbents. This independence allows them to innovate more freely, enhance precision, and lay the groundwork for broader API access that could empower the next generation of legal tech tools.

Finally, the conversation looks ahead. Otto predicts that AI will amplify the capabilities of individual lawyers, enabling them to process more data at greater depth. In a world where clients are increasingly self-educating with tools like ChatGPT, MidPage aims to provide lawyers with the means to maintain credibility and efficiency while ensuring accuracy. As AI models grow more capable and agentic, Otto sees an evolution not just in how legal research is conducted, but in how lawyers interact with knowledge, data, and ultimately their clients.

Listen on mobile platforms: Apple Podcasts | Spotify | YouTube

[Special Thanks to Legal Technology Hub for sponsoring this episode.]

Blue Sky: @geeklawblog.com @marlgeb
Email: geekinreviewpodcast@gmail.com
Music: Jerry David DeCicca

Transcript

Continue Reading Otto von Zastrow on MidPage.AI and the Future of AI-Powered Legal Research

In this special episode of The Geek in Review, we take the new ChatGPT Advanced Voice Mode for a spin, inviting it to analyze and discuss all 23 episodes from the podcast’s 2025 season. The episode kicks off with a high-level overview of the biggest legal tech themes from the year so far. ChatGPT Voice quickly identifies a significant shift toward agentic AI tools—those that go beyond automation to become integrated partners in the legal workflow. These tools are helping firms reimagine service delivery, improve access to justice, and rethink the very structure of their businesses.

Throughout the episode, the trio explores consistent trends shared by legal tech leaders in recent episodes. These include the integration of AI into core legal tasks, such as contract review and litigation support; the rise of new business models like value-based pricing; and the ongoing focus on ethical AI use. Specific guests like Feargus MacDaeid and Nnamdi Emelifeonwu (Definely), Atena Reihani (ContractPodAI), and Raghu Ramanathan (Thomson Reuters) are spotlighted for their insights into embedding AI directly into lawyers’ existing toolsets to streamline and elevate legal workflows.

The conversation then turns to the importance of human oversight in maintaining trust and legality as AI becomes more embedded in legal systems. ChatGPT Voice references Garfield AI’s regulated model and various RAG-based solutions to illustrate how combining AI efficiency with human judgment creates responsible innovation. The emergence of AI-native law firms and more flexible pricing models reflects an industry on the cusp of transformation, driven by both technological advancement and client-centered thinking.

Marlene and Greg also take a moment to reflect on the human stories behind the tech. They highlight episodes featuring guests like Laura Clayton McDonald, Kenzo Tsushima, Wendy Jepsen, and Gabriela Isturiz, who bring servant leadership, change management, behavioral science, and personal purpose into their work. These conversations remind us that innovation in legal tech is as much about people and values as it is about platforms and code.

To close out the episode, the hosts pose their signature “crystal ball” question. ChatGPT predicts the legal tech breakthrough of 2025 will be the mainstream adoption of agentic AI systems that proactively support legal professionals in real time. It also shares that its favorite episode was the one featuring Garfield AI and their bold vision of a fully AI-powered law firm handling small claims—a true glimpse of the future. Whether you’re curious about cutting-edge workflows or inspired by legal professionals integrating their personal passions into practice, this episode captures a compelling snapshot of where legal tech is headed.

Continue Reading What Does ChatGPT Think of Our 2025 Episodes? We Ask ‘Her’

On this episode of The Geek in Review, we welcome Philip Young, co-founder and CEO of Garfield AI, the first AI-powered law firm approved for practice by the UK’s Solicitors Regulation Authority (SRA). The episode kicks off with a discussion of recent stories that explore AI’s evolving role in legal proceedings, such as avatars testifying in court and the ethical challenges that arise when deepfakes and synthetic personas enter the legal process. Philip, a seasoned litigator and technologist, draws from his 25 years of legal experience to weigh in on the potential and perils of AI-driven courtrooms, emphasizing the importance of authenticity and trust in legal proceedings.

Young shares the backstory behind Garfield AI, which was inspired by a real-world problem faced by his brother-in-law, a plumber who struggled to recover small debts from non-paying clients. Seeing an opportunity to help small businesses navigate the small claims process efficiently, affordably, and with minimal friction, Philip set out to build a system that mirrors what a traditional law firm would do—without the high cost or time burden. Garfield reads invoices and contracts, verifies the legitimacy of claims, guides users through pre-action letters, claim filings, and even court preparation, all while remaining compliant with UK legal standards.

One of the most unique features of Garfield AI is its dual design: it serves both pro se claimants and can be white-labeled for use by traditional law firms. Young explains how legal professionals can integrate Garfield into their workflows, using it to generate documents under their own branding while Garfield handles the backend. This hybrid approach provides flexibility for users, whether they prefer a self-service platform or seek a human-in-the-loop experience. Garfield’s early success has sparked interest across the legal spectrum—from solo practitioners to regulatory bodies—demonstrating that AI can support, rather than displace, the legal profession.

The conversation also delves into Garfield’s journey to regulatory approval. Young describes the rigorous process of working with the SRA, ensuring the platform aligned with legal duties to clients and the courts. He highlights the importance of maintaining accountability and explains how Garfield was rolled out cautiously, with layers of human oversight and a roadmap toward data-driven, risk-based review. With increasing inquiries from international regulators and courts, Young sees the platform as a potential blueprint for improving access to justice beyond the UK, although he notes that success depends on a supportive regulatory environment, judicial openness, and sufficient technological infrastructure.

Beyond the tech, the episode emphasizes the human element of law. Young passionately advocates for AI as a tool that enhances legal practice rather than replaces it—freeing lawyers from mundane tasks and enabling them to focus on strategy, advocacy, and client care. He shares his hope that Garfield AI and similar innovations will close the access-to-justice gap by enabling small-value claims to be pursued cost-effectively and fairly. As he notes, AI may never replace the human lawyer’s emotional intelligence and presence in court, but it can certainly help more people get there.

To learn more about Garfield AI and its innovative approach to legal automation, listeners can visit www.garfield.law. This episode is a must-listen for anyone interested in the intersection of law, technology, and the future of justice. As always, the podcast ends on a warm note with music by Jerry David DeCicca, underscoring a thought-provoking conversation that blends legal tradition with the tech of tomorrow.


Continue Reading Philip Young of Garfield AI: The World’s First AI Law Firm Gets the Green Light

This week, we sit down with Kenzo Tsushima, Managing Director of Mind Factory at Morae, to discuss how AI is transforming legal operations and consulting services. Kenzo shares his unique career journey, blending a passion for technology with legal expertise, and highlights why the legal industry is positioned to leverage AI advancements more quickly than heavily regulated sectors like healthcare. With a background that spans consulting leadership and GC roles, Kenzo offers a rare dual perspective on how law firms and corporate legal departments can future-proof themselves by embracing emerging technologies like MorAI, Morae’s proprietary AI platform.

Kenzo discusses the creation of MorAI, launched in mid-2023, as a response to widespread legal tech “decision fatigue” — where an abundance of AI tools overwhelms buyers. Rather than pushing generic solutions, Morae designed MorAI around highly specific legal workflows such as contract review, RFP response automation, and internal helpdesk queries. Kenzo emphasizes the importance of “solutionizing” AI: showing real, targeted results rather than relying on hype. Using examples like their Helpdesk module, Kenzo explains how legal teams can instantly boost efficiency by querying historical RFP responses and deploying AI for natural language document reviews, significantly reducing administrative burdens across legal and procurement functions.

A strong advocate for servant leadership and human-centric AI adoption, Kenzo outlines how Morae’s approach goes beyond technology — focusing heavily on change management and upskilling legal professionals. Through programs like SEEDS (Skill Enablement Employee Development Series), Morae invests in developing both consulting and technology skills among its team. Kenzo notes that traditional legal professionals, often unfamiliar with public speaking or technology tools, can thrive when given structured, bite-sized learning opportunities. This consultative-first mindset, he argues, not only improves client outcomes but creates a more resilient and engaged workforce.

Addressing cybersecurity and data privacy concerns, Kenzo details Morae’s use of private Azure instances and multiple legally trained LLMs to ensure client data security and confidentiality. Unlike public AI tools, MorAI is designed to be a trusted legal companion that never co-mingles client data or trains on external internet content. Kenzo also explains why Morae’s strategy of multi-LLM deployment (leveraging OpenAI, Anthropic, and others) future-proofs clients against rapid developments in AI models — ensuring their legal technology stacks remain agile and powerful over time.

Finally, Kenzo shares his insights on the challenges ahead for the legal industry: decision fatigue, resistance to change, and the crucial need to align with younger generations’ expectations around technology use. He urges law firms and corporate legal departments to rethink build-vs-buy strategies, embrace commercially available solutions, and foster AI champions within their organizations. As new roles like legal engineers and prompt engineers emerge, firms that support AI-enabled upskilling and servant leadership will not just survive — they will lead the next era of legal innovation.


Continue Reading Kenzo Tsushima of Morae on Innovation, Change Management, and Servant Leadership