I have a prediction that I want to share with you, something I envision happening just a few short weeks from now. I imagine an associate at a law firm doing something that will make every product manager at Thomson Reuters and LexisNexis choke on their morning coffee. She has a contract dispute question. A real one. A partner is waiting. And the clock is ticking.

She won’t open Westlaw. She won’t open Lexis. She won’t open her browser at all.

She types her question into a work-approved AI chat window. Twenty-four minutes later she has a memo, citations included, sent off to the partner. She enters 0.4 hours in her time entry system. And she is done.

The KeyCite red flag that Thomson Reuters spent decades building? She never saw it. The Shepard’s signal? Didn’t see that either. The annotated treatise hierarchy that some editor in Eagan, Minnesota agonized over? It came through as a flat blob of text in a JSON response that the model summarized into a single sentence.

Don’t get me wrong. Westlaw and Lexis were in the research process. She just didn’t notice they were there. And that, friends, is the near-future I want to talk about.

I’ve been privately calling this “Shadow UX.” Think of it as the user-experience cousin of Shadow IT. We all know the effects of Shadow IT, right? That was when the marketing team started using Dropbox without telling the IT department, and three years later IT realized the entire company’s roadmap was sitting on someone’s personal account. Shadow UX is the same thing at the interface layer. An unauthorized layer sitting between the user and the vendor’s product, and the vendor doesn’t control it, doesn’t design it, and increasingly doesn’t even know it exists.

For legal information vendors, the Shadow UX layer is mostly an LLM with a few tool calls bolted on. There are other versions out there too: browser extensions that re-skin search results, paralegals building Notion dashboards off APIs, scraping wrappers feeding firm intranets. The AI agent is the one eating everyone’s lunch though.

Here’s why I’ve been thinking about Shadow UX so much lately.

For thirty years, vendors competed on the browser dashboard: the fancy charts, the little visual icons, the hover states, pixel-perfect interfaces designed for a human eye scanning a screen. In 2026, the user is increasingly something else. It’s a model reading a JSON schema at inference time. If you’ve optimized your product for an audience that’s becoming a minority of your traffic, you’re going to find out the hard way.

OK so why now? Three things had to happen at the same time, and they all did within about eighteen months.

First, the models actually got good. The 2024 models couldn’t handle jurisdictional nuance. The 2026 models draft memos that pass partner review. Not every time, sure, but often enough that associates are using them anyway.

Next, the billable-hour math became impossible to ignore. We bill in six-minute increments. Any tool that turns a ninety-minute task into nine minutes is going to get used, with or without IT’s blessing. (Sound familiar? Hello again, Shadow IT.)

And finally, the Model Context Protocol showed up. MCP is the part of this story that doesn’t get enough attention. Imagine if every database, every research platform, every internal wiki spoke a common language to AI agents. That’s MCP. Companies like NetDocuments and Midpage adopted it. Specialized vendors are rolling out MCP servers for everything from patent search to legislative tracking. Once the protocol got standardized, the vendor’s UI stopped being a moat and started being a speed bump.
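
To make the “common language” point concrete, here’s a minimal Python sketch of the kind of structured request/response shape this enables. The tool name, field names, and case data below are entirely made up for illustration; they are not the actual MCP wire format (MCP itself is JSON-RPC based) or any vendor’s real schema:

```python
import json

# A hypothetical MCP-style tool call. Note who the caller is: an agent,
# not a lawyer clicking through a portal.
request = {
    "tool": "search_caselaw",
    "arguments": {"query": "anticipatory repudiation", "jurisdiction": "NY"},
}

# A hypothetical vendor response: everything the portal UI would have
# rendered -- signals, hierarchy, provenance -- arrives as plain
# structured data for a model, not a human eye, to consume.
response = {
    "results": [
        {
            "citation": "Example v. Example, 123 N.E.3d 456 (N.Y. 2019)",
            "treatment_flag": "red",
            "headnotes": ["Contracts > Breach > Anticipatory Repudiation"],
        }
    ]
}

# The round-trip is just tokens. No red flag icon, no hover state,
# no editorial layout -- only whatever the agent chooses to surface.
wire = json.dumps(response)
assert json.loads(wire) == response
```

The point of the sketch: once the interaction is standardized like this, the vendor’s carefully designed interface never enters the picture at all.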

Now here’s the part that should worry legal information providers. The editorial work that built these companies, the headnotes, the Key Number system, KeyCite, Shepard’s, all of that gets flattened.

In a portal, a KeyCite red flag is loud. It’s red. It’s literally a flag. You see it before you see anything else on the page. In the Shadow UX layer, it’s a token in a JSON field. If the model’s summarization logic doesn’t promote it, the user never sees it. The signal is technically still there. It’s just invisible.
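
Here’s a toy illustration of that failure mode (hypothetical field names; this assumes nothing about any real agent’s internals). Whether the user ever learns about negative treatment depends entirely on whether the summarization step happens to promote one JSON field:

```python
# A hypothetical tool result: the red flag is just one more field.
result = {
    "citation": "Example v. Example, 123 N.E.3d 456 (N.Y. 2019)",
    "treatment_flag": "red",  # "do not rely on this case" -- if anyone looks
    "summary_text": "Held that notice of repudiation must be unequivocal.",
}

def naive_summary(r: dict) -> str:
    # Summarization logic that never promotes the flag field:
    # the signal is technically present, but the user never sees it.
    return f"{r['citation']}: {r['summary_text']}"

def flag_aware_summary(r: dict) -> str:
    # One extra branch is the difference between visible and invisible.
    warning = ""
    if r.get("treatment_flag") == "red":
        warning = " [WARNING: negative treatment -- verify before citing]"
    return f"{r['citation']}: {r['summary_text']}{warning}"

assert "WARNING" not in naive_summary(result)
assert "WARNING" in flag_aware_summary(result)
```

Nothing in the data changed between those two functions; only the interface logic did. That logic now lives in the Shadow UX layer, outside the vendor’s control.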

The headnote tree is worse. Editors spent generations nesting these things to show legal relationships. Models hate hierarchies. They flatten them into bullet lists, or worse, into prose. The categorical context disappears.
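
Here’s what that flattening looks like in miniature (a sketch with made-up headnote labels, not any vendor’s actual taxonomy). Every label survives the flattening; the parent–child relationships do not:

```python
# A made-up slice of a headnote hierarchy. The nesting IS the information:
# it tells you which doctrine a headnote belongs under.
tree = {
    "Contracts": {
        "Breach": {
            "Anticipatory Repudiation": ["Headnote 4"],
            "Material Breach": ["Headnote 7"],
        },
        "Remedies": {"Expectation Damages": ["Headnote 9"]},
    }
}

def flatten(node, out=None):
    """Collapse the tree the way a model's bullet-list output does:
    every leaf survives, but who-belongs-under-whom is gone."""
    if out is None:
        out = []
    if isinstance(node, dict):
        for key, child in node.items():
            out.append(key)
            flatten(child, out)
    else:
        out.extend(node)
    return out

flat = flatten(tree)
# Every label is still present...
assert "Anticipatory Repudiation" in flat and "Headnote 4" in flat
# ...but nothing in the flat list records that "Breach" contained it.
# The categorical context lived in the structure, and the structure
# is exactly what got dropped.
```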

And then there’s the provenance problem, which is the one that actually worries me. When an agent synthesizes ten cases into one paragraph, the user gets a confident narrative. They don’t see that eight came from KeyCite-validated sources and two came from a sketchy public database the model decided to trust. The vendor’s brand was always the proxy for “this is reliable.” When the brand is invisible, the proxy is gone.

I’ll put it bluntly. If you’re a research vendor, your brand value is currently being laundered through someone else’s chat interface, and you’re not getting credit for it.

The pricing model is the other shoe about to drop.

Seat-based pricing is the deal we’ve all lived with since the 90s. You pay per lawyer. The lawyer logs in. Everybody understands. Now… an AI agent doesn’t log in. It doesn’t have a seat. It can do the work of fifteen associates in an afternoon though. So vendors are watching seat counts flatten while their compute costs spike. The infrastructure bill goes up while the revenue line goes sideways. That’s not a sustainable shape.

The industry is wobbling toward usage-based and outcome-based pricing. Pay per query. Pay per resolved research task. Pay per drafted clause. Salesforce and Zendesk are already doing this in their own categories. The math makes sense for vendors. The problem is that law firms hate metered bills. CIOs cite cost forecasting as the number one headache with consumption pricing. Nobody wants their Westlaw bill to look like an AWS invoice.

Here’s where the real fight is going to happen, and I haven’t seen anybody talk about it openly yet.

Put yourself in the chair of a Westlaw or Lexis sales VP. You’re watching seat utilization drop. Associates are logging in less. Partners barely log in at all. The minutes-per-seat metric you’ve been using internally to justify renewals is collapsing. Meanwhile your compute costs are spiking because the firm’s MCP-connected agents are hammering your APIs and MCP servers at three in the morning to draft research memos.

What do you do?

I’ll tell you what you do. You add an AI agent access fee on top of the seat license. Premium tier. “Enterprise agentic access.” Whatever the marketing team lands on. And you keep raising the per-seat price every renewal cycle. Because if each seat is getting cheaper for the firm to actually use, your only path to flat or growing revenue is to charge more for each one. Double dip. Seats plus agents. Stack them.

Now flip the chair. You’re a firm CIO or a law firm library director. Your usage data shows seat logins dropping. Your associates are no longer going directly to Westlaw or Lexis. The vendor calls to renew, the price per seat is up 10%, and now there’s a separate line item for “agent access” that wasn’t on last year’s quote. You ask why you’re paying more for less. The vendor explains, with a straight face, that the value sits in the data, the agent extracts more value per query, and the bill reflects that. You disagree.

That’s the battle.
Continue Reading Shadow UX and the Upcoming Fight over Legal Research

This week, we sit down with two guests from Anthropic, Matt Samuels, Senior Product Counsel, and Den Delimarsky, a core maintainer of the Model Context Protocol, or MCP. Together, they unpack why MCP is drawing so much attention across the legal industry and why some are calling it the USB-C for AI. For law firms long burdened by disconnected systems, scattered data, and the infamous integration tax, MCP offers a shared framework for connecting models to the places where real work and real knowledge live, from iManage and Slack to email, data lakes, and internal tools.

Den explains that the promise of MCP is not tied to one model or one vendor. Instead, it creates a standardized way for AI tools to securely interact with many different systems without forcing organizations to build one-off integrations every time they want to connect a new source. The conversation gets especially relevant for legal listeners when Greg and Marlene press on issues like permissions, ethical walls, least-privilege access, and auditability. The answer from Anthropic is reassuring. MCP is built to work with familiar enterprise security concepts such as OAuth and role-based access, meaning firms do not have to throw out their security model in order to explore new AI workflows.

Matt brings the legal and operational lens, translating MCP into practical use cases for lawyers, legal ops teams, and security leaders. He describes how AI becomes far more useful once it has access to the systems lawyers already rely on every day, while still operating within carefully defined administrative controls. The discussion highlights a key shift in how firms should think about AI. This is no longer about asking a chatbot a clever question and getting a polished paragraph back. With MCP, firms are moving toward systems where AI can retrieve, correlate, summarize, draft, and support actions across multiple platforms, all while staying inside the guardrails set by the organization.

The episode also explores how MCP fits into the rise of agentic workflows, apps, plugins, and skills. Rather than treating AI as a static assistant, Anthropic describes a future where these tools become active participants in legal work, pulling together information from multiple sources, helping assemble case timelines, drafting notes into a shared document, and supporting lawyers in a far more integrated workspace. The conversation around skills is especially useful for firms thinking about standard operating procedures, preferred drafting styles, escalation rules, and repeatable work product. Skills and MCP do different jobs, but together they start to look like the operating system for structured legal workflows.

By the end of the conversation, one message comes through clearly. The legal profession is still early in this shift, but the pace is picking up fast. Both Matt and Den encourage listeners to stop treating these tools like abstract future concepts and start experimenting with them now. At the same time, they offer an important note of caution. As much as these systems promise speed and efficiency, lawyers still need to protect the craft of lawyering, their judgment, and the human choices that matter most. For firms trying to make sense of where AI is headed next, this episode offers a grounded and practical look at the infrastructure layer that could shape what comes next.

Listen on mobile platforms:  Apple Podcasts | Spotify | YouTube | Substack

[Special Thanks to Legal Technology Hub for sponsoring this episode.]

Email: geekinreviewpodcast@gmail.com
Music: Jerry David DeCicca
Transcript

Continue Reading Anthropic’s Matt Samuels and Den Delimarsky – Claude & MCP: Building the USB-C for the Legal Tech Stack

(How to Create a Claude Skill or Plugin for Law and Use It in Claude Cowork)

On February 3, 2026, a single product announcement from Anthropic wiped approximately $285 billion in market capitalization off the stock market in a single trading day. Thomson Reuters dropped 16%. LegalZoom cratered nearly 20%. RELX, the parent company of LexisNexis, fell 14%. Wolters Kluwer lost 13%. The London Stock Exchange Group plunged 8%. The contagion spread to Salesforce, ServiceNow, FactSet, and dozens of other enterprise software companies. Bloomberg called it a “$285 billion rout.” Analysts started calling it the “SaaSpocalypse.”

The catalyst? Anthropic released a set of open-source plugins for its Claude Cowork tool. One of them was a legal plugin that could review contracts, flag risks, triage NDAs, and track compliance. Investors took one look and decided the entire SaaS business model was cooked.

Here’s the part that still gets me: the legal skill at the core of this market panic is, at its heart, a structured markdown file. We’re talking about roughly 250 lines of well-organized pseudo-code: plain-text instructions that tell Claude how to think about legal workflows. No compiled binaries. No proprietary algorithms. No infrastructure. Just markdown. And it’s included in the $20/month Claude Pro subscription. That’s less than most attorneys spend on lunch. The full Business Insider breakdown of the stock carnage is here: https://www.businessinsider.com/anthropic-cowork-legal-plugin-publishing-stocks-legalzoom-thomson-reuters-relx-2026-2

So What Are Claude Skills & Plugins, Exactly?

A Claude Skill is a set of instructions, typically written in markdown, that teaches Claude how to approach a specific domain or task in a repeatable, structured way. Think of it as giving Claude a playbook. Instead of prompting from scratch every time you need a contract reviewed or a compliance check run, you write the instructions once, and Claude follows them consistently. Skills include things like domain knowledge (“here’s how our firm handles risk assessment”), step-by-step workflows (“when reviewing an NDA, check these clauses in this order”), and output formatting (“flag issues as green/yellow/red with recommended language”). They’re stored as plain files, organized in folders, and loaded automatically when relevant. A Claude Plugin is a bundle that packages one or more skills together with slash commands, MCP connectors, and sub-agents into a single installable unit, while a skill is just a standalone markdown file that teaches Claude how to handle a specific task or domain.
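
To make that concrete, here is a small, entirely hypothetical sketch of what such a skill file might look like. The workflow steps, clause list, and flag scheme below are invented for illustration; for real formats, start from Anthropic’s published examples:

```markdown
# NDA Triage Skill

## When to use
Apply this skill whenever the user asks for review of a non-disclosure
agreement or mutual confidentiality agreement.

## Workflow
1. Identify the parties, the effective date, and the governing law.
2. Check these clauses, in this order: definition of confidential
   information, term and survival, permitted disclosures, remedies.
3. Compare each clause against the firm's standard positions.

## Output format
- Flag each clause green / yellow / red.
- For every yellow or red flag, propose replacement language.
- End with a one-paragraph summary suitable for a partner email.
```

That really is the whole trick: plain, readable instructions, written once, applied consistently.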

The important thing to understand is that you don’t have to write these from scratch. You can work with Claude itself to build skills and plugins. Describe what you want the workflow to do, what your firm’s processes look like, what outputs you need, and Claude will help you draft, test, and refine the skill. Anthropic has also open-sourced eleven starter plugins on GitHub (https://github.com/anthropics/knowledge-work-plugins/tree/main/legal) that you can customize. The barrier to entry here is remarkably low. If you can write a clear memo, you can write a skill.

Now Let’s Talk About Claude Cowork

Claude Cowork is Anthropic’s desktop tool that takes Claude out of the chat window and puts it to work on your actual files. You point it at a folder on your computer, give it a task, and it plans, executes, and iterates through multi-step workflows on its own. It can read documents, create new files, delete files, organize folders, and coordinate multiple workstreams. If you’ve used Claude Code (the developer-facing terminal tool), Cowork is the same engine with a friendlier interface. It’s essentially Claude Code for laptop professionals who don’t want to learn code. It is currently only available on macOS, but Anthropic recently launched the Claude app for PC, so we’re hopeful Cowork will show up there soon.

What makes Cowork interesting for legal professionals is that it doesn’t just respond to one prompt at a time. You set a goal, and Claude works through it like a (very fast, very thorough) junior associate. It asks for clarification when it needs it, saves outputs directly to your file system, and loops you in on its progress. The plugin system takes this further: install a legal plugin, and Cowork immediately knows your firm’s preferred tone, your risk tolerances, your clause library, your review workflow, which files to reference as templates, and so on. It’s a configurable work environment; no more cutting and pasting required.

One more thing worth noting, because I think it says something profound about where we are: Cowork was built entirely by Claude Code in approximately 10 days. An AI coding agent built its own non-technical sibling, and that sibling then crashed the legal tech market. 2026 is going to be wild.

Practical Examples: Skills for Lawyers and Legal Academics

Enough theory; let’s build something. Below are four practical examples, two for practicing attorneys and two for legal academics, showing how you could combine Claude Skills and Cowork to create custom workflows. Each example includes the actual markdown you’d use to create the skill. Markdown is still human-readable; it just gives the text useful structure for an LLM. Basically, any complex prompting I do these days is in markdown.

For Lawyers

  1. Client Intake and Conflict Check Workflow

Continue Reading How to Crash the Legal Tech Market

A fresh Anthropic announcement set off a week of market jitters and existential questions: what happens when the big model shops ship “legal productivity” features and the public markets flinch? This week, we bring Otto von Zastrow back for a rapid-response conversation, with a front-row view from New York and a blunt take: software grows cheaper to reproduce, so value migrates. The discussion lands on a key distinction, interface versus data, and why the old guard still holds leverage even as new entrants sprint.

From there, the conversation zooms in on “systems of record” and the uneasy truth that the safest vault often loses mindshare when a new interface sits on top. Otto points to email, calendar, SharePoint, DMS platforms, and the growing power of a single chat workspace to become the place where work happens. The hosts press on a critical nuance for lawyers: legal research data is not flat, and “good law” demands hierarchy, treatment, and reliable citation context, not a pile of cases plus vibes.

Otto frames Midpage.ai as a data company first, built on continuous court ingestion plus normalization that used to demand armies of editors. He argues AI turns messy inputs into structured repositories at a scale that favors speed and breadth, yet accuracy still requires process design and verification loops. Greg sharpens the point for litigators: the bar is not clever answers, the bar is defensible citations, negative treatment, and confidence that the record matches reality. Otto agrees on the need for trust, then flips the lens: many annotation tasks look like grind work where modern models, paired with strong QA, start to outperform large manual pipelines.

The headline feature is integration via Model Context Protocol, described as a USB-C style connector for tools and models. Midpage chose distribution inside Claude and ChatGPT rather than forcing lawyers into yet another standalone site. Otto explains the wager: lawyers want fewer surfaces, and general chat platforms ship features at a pace no niche vendor matches alone, so the smart move is to meet users where daily work already lives. The demo story centers on research inside chat, with Midpage returning real case links and citations, then letting the user push deeper with uploads and follow-on tasks, while keeping verification one click away.

The back half turns to second-order effects: pricing, agent spend, and the rise of “vibe” work where professionals act more like managers of agent teams than sole authors of first drafts. Marlene raises governance and liability when internal DIY tools pop up outside formal review, and Otto predicts a pendulum toward professionalized deployment plus change management. The conversation closes on Midpage’s “holy grail” topic, citators and the case relationship graph, plus a clear-eyed forecast: standalone research websites shrink as a primary workspace, while research becomes groundwork performed by agents, with lawyers spending more time interrogating results than running searches.

Listen on mobile platforms:  Apple Podcasts | Spotify | YouTube | Substack

[Special Thanks to Legal Technology Hub for sponsoring this episode.]

Email: geekinreviewpodcast@gmail.com
Music: Jerry David DeCicca

Transcript

Continue Reading Midpage Goes Native: Legal Research Inside Claude and ChatGPT, with Otto von Zastrow

The promise of generative artificial intelligence (AI) in legal practice is seductive: speed up document review, contract drafting, legal research, and thereby shave down hours billed. Yet the reality for many law firms is different. A recent survey by the Association of Corporate Counsel (ACC) and Everlaw found that nearly 60% of in-house counsel reported

This week, we welcome back Tom Martin, CEO of LawDroid, to discuss his widely read “AI Law Professor” column for Thomson Reuters and his five-level roadmap for legal AI. Martin explains that the framework was inspired by a leaked OpenAI memo and aims to give legal professionals a clearer picture of AI’s trajectory. The five levels range from basic chatbots to fully AI-run organizations, with intermediate stages such as reasoners, agents, and innovators. According to Martin, while we are still in the early stages, the release of GPT-5 and its reasoning capabilities has accelerated progress toward higher levels, especially in the development of autonomous agents.

The conversation turns to the implications of GPT-5’s hybrid reasoning model, which combines inference with step-by-step reasoning to deliver more relevant answers. Martin sees this as a significant shift for the legal industry, moving beyond single-response chatbots toward sustained, goal-oriented AI. He predicts that while the technology for fully autonomous legal agents could be available within a year, widespread adoption in law firms and corporations will take closer to three years. However, with these advancements come ethical concerns. Martin outlines four principles for responsible AI agents: transparency, autonomy, reliability, and visibility, cautioning that AI’s knowledge is always bounded and potentially incomplete.

Reflecting on the legal industry’s pace of change since their last discussion, Martin notes that while some firms are sprinting to adopt AI, others may already be too late to catch up. He warns that professional services organizations must actively integrate AI to remain competitive. The discussion explores the potential for tech giants or AI companies to acquire major legal information providers, and Martin argues that the future lies in blending software, consulting, and education into a unified service model. This integrated approach, he believes, will be necessary for survival in a market where AI is capable of generating solutions without traditional software development cycles.

Beyond the legal tech roadmap, Martin shares insights from his teaching at Suffolk University Law School and his observations from producing the “Last Week in Legal AI” news series. He sees both opportunities and risks for the next generation of lawyers, particularly in acting as translators between AI systems and legal practice. The discussion touches on generational attitudes toward AI, with younger users showing both skepticism and heavy reliance on AI for personal and professional support. Martin also addresses societal concerns, from AI in mental health applications to job displacement, and stresses the importance of curating AI outputs with human judgment.

The episode wraps with Martin’s update on the American Legal Technology Awards, set for October 15 at Suffolk University Law School in Boston, which he describes as “the Oscars of legal tech.” When asked about the biggest challenge for the next few years, Martin points to the uncertainty of where professionals will fit in a rapidly shifting world. He envisions a possible new model that combines service, education, and software to deliver legal help at scale, but stresses that no one knows exactly how the future will unfold. His hope is that the AI-driven abundance ahead will be shared broadly, without excluding people from its benefits.

Listen on mobile platforms:  Apple Podcasts | Spotify | YouTube

[Special Thanks to Legal Technology Hub for sponsoring this episode.]

Blue Sky: @geeklawblog.com @marlgeb
Email: geekinreviewpodcast@gmail.com
Music: Jerry David DeCicca

Transcript

Continue Reading Tom Martin on the Five Levels of Legal AI and What GPT-5 Means for the Future of Law

This week, we welcome longtime friend and legal tech veteran Ken Crutchfield, founder of Spring Forward Consulting. Ken brings his extensive experience from major legal information vendors like Thomson Reuters, Bloomberg, and Wolters Kluwer into a timely and candid discussion about the current phase of artificial intelligence in the legal industry. Comparing today’s generative AI surge to the American Industrial Revolution, Ken describes this moment as the “Wild West” era—full of promise, hype, overinvestment, and, critically, few rules.

Drawing historical parallels to railroads, oil barons, and steel magnates, Ken illustrates how unchecked growth and technological innovation can outpace regulation until market forces or policy catch up. He notes the resurgence of large-scale infrastructure investment, now not in steel or steam, but in compute power and data centers. Just as J.P. Morgan helped stabilize chaotic markets in the 19th century, Ken suggests today’s AI frontier needs a similar recalibration, and possibly new rules of engagement.

The conversation shifts toward the practical realities of legal tech adoption. Ken emphasizes that law firms’ expectations of perfection often collide with startups’ resource limitations. Vendors need to rethink how they engage with firms by building credibility, focusing on integration, and delivering actual use-case wins. Firms, in turn, must move beyond the billable hour mindset and consider new metrics like Return on Experience. Adoption is no longer optional, it’s strategic, competitive, and increasingly client-driven.

Ken also unpacks the looming implications of content rights and data ownership in the age of AI. If firms aren’t investing in data hygiene now, they risk being left behind when more sophisticated AI tools demand clean, structured, and secure datasets. AI isn’t just about automating workflows, it’s about being ready to plug into a future where interoperability, metadata, and permissions will dictate who thrives and who gets leapfrogged.

Finally, Ken calls for scenario planning: not just reacting to what OpenAI or Anthropic might do next, but anticipating it. Firms and vendors alike should double down on what works, define success before launching new projects, and invest in meaningful adoption strategies. In a world moving this fast, it’s no longer about who gets there first, it’s about who gets there with a plan.

Listen on mobile platforms:  Apple Podcasts | Spotify | YouTube

[Special Thanks to Legal Technology Hub for sponsoring this episode.]

Blue Sky: @geeklawblog.com @marlgeb
Email: geekinreviewpodcast@gmail.com
Music: Jerry David DeCicca

Transcript

Continue Reading The Wild West of AI: A Legal Tech Reckoning with Ken Crutchfield

In this week’s episode of The Geek in Review, Greg Lambert flies solo while co-host Marlene Gebauer enjoys some well-deserved relaxation. Greg welcomes Trevor Quick, Strategic Business Development lead at Harvey.ai, to discuss one of the most talked-about companies in legal tech today. With a staggering $300 million Series E funding round and a $5 billion valuation, Harvey is rewriting the narrative of what legal AI can be, as well as who it is for. Trevor, a longtime listener turned guest, brings an insider’s view of the company’s evolution and its ambitions for reshaping the legal services ecosystem.

Trevor provides behind-the-scenes insight into how Harvey has become such a magnet for both capital and attention. He attributes the rapid growth to the company’s structure—where legal expertise and AI engineering are in constant collaboration. From founder Winston Weinberg’s legal acumen to co-founder Gabe Pereyra’s technical leadership, Harvey’s DNA has always been rooted in practical use cases for lawyers. The company’s commitment to building with, not just for, the legal community has led to the development of a GTM team composed of practicing attorneys who work directly with law firms and corporate legal departments to customize AI solutions that align with real workflows.

One of the most talked-about moves is Harvey’s deepening partnership with LexisNexis. Trevor explains how integrating Lexis’s data directly into Harvey’s platform removes the friction lawyers face when juggling multiple research tools. With access to Shepard’s citations and native case law lookup, attorneys can now verify and trust the results Harvey generates—turning it from a generative assistant into a full-fledged research companion. This update not only boosts confidence but also meets the rigorous standards legal professionals demand, especially those skeptical of AI’s early stumbles.

The conversation also touches on Harvey’s new functionalities like the Workflow Builder, Vault document review enhancements, and Deep Research Mode. Trevor likens these innovations to “agentic” workflows that let users build custom solutions with low or no code. Whether it’s KM teams building tailored research processes, or attorneys streamlining diligence reviews, Harvey is giving legal professionals the ability to shape how AI works with them, not around them. Trevor emphasizes that Harvey’s success comes from its honesty, adaptability, and the trust it has earned by meeting lawyers where they are—not forcing them to change how they work overnight.

In closing, Greg asks Trevor to gaze into the crystal ball, and while Trevor jokingly admits that predicting the future in AI is a fool’s errand, he offers a vision rooted in intuition, collaboration, and democratized access to justice. From expanding into tax, compliance, and marketing workflows, to becoming a central hub for legal and adjacent industries, Harvey is on a path to not only augment what lawyers do, but to enhance how they feel about the work itself. With world-class partnerships and a relentless pace of innovation, Trevor makes a compelling case that Harvey isn’t just a tool—it might just be the toolbelt.

Listen on mobile platforms:  Apple Podcasts | Spotify | YouTube

[Special Thanks to Legal Technology Hub for sponsoring this episode.]

Blue Sky: @geeklawblog.com @marlgeb
Email: geekinreviewpodcast@gmail.com
Music: Jerry David DeCicca

Transcript

Continue Reading Trevor Quick on Harvey.ai as the Utility Belt for Lawyers

In this episode of The Geek in Review, hosts Marlene Gebauer and Greg Lambert sit down with Otto von Zastrow, the founder and CEO of MidPage.AI, an AI-native legal research platform. With a recent $4 million seed round and an ambitious mission to rival legacy research tools, MidPage is drawing attention across the legal industry. Otto shares his unconventional journey from AI-powered lawn robotics to transforming how litigators interact with case law. His pivot into legal tech was fueled by a combination of technical curiosity, the rise of language models, and firsthand insight from his lawyer friends overwhelmed by inefficient research workflows.

Otto walks listeners through the core of MidPage’s offering, which includes the usual suspects—case law, statutes, regulations—but with a twist: smarter search tools, intuitive UI, and features like a proprietary citator and their newly launched Proposition Search. This feature aims to solve the long-standing “needle-in-a-haystack” problem by surfacing judicial language that matches precise arguments, accompanied by contextual metadata and filters. Otto highlights that the goal isn’t just to match or mimic tools like Lexis or Westlaw, but to rethink what legal research should feel like when modern AI capabilities are built in from the ground up.

One of the more unique aspects of MidPage’s product development is their internal “kangaroo court”—a monthly teamwide challenge where employees, regardless of role, must conduct legal research using MidPage or traditional tools. Otto notes that this process not only improves product design but builds real empathy for the user experience. Engineers and designers are encouraged to think like litigators, helping identify pain points and close functionality gaps. As a result, the product continually evolves based on firsthand user scenarios, not just speculation.

The episode also delves into the data-side challenges that have historically prevented innovation in legal research. Otto explains why now—thanks to improved AI models and open access to data—is a rare inflection point for startups. He emphasizes the strategic importance of MidPage building its own case law dataset to avoid being beholden to incumbents. This independence allows them to innovate more freely, enhance precision, and lay the groundwork for broader API access that could empower the next generation of legal tech tools.

Finally, the conversation looks ahead. Otto predicts that AI will amplify the capabilities of individual lawyers, enabling them to process more data at greater depth. In a world where clients are increasingly self-educating with tools like ChatGPT, MidPage aims to provide lawyers with the means to maintain credibility and efficiency while ensuring accuracy. As AI models grow more capable and agentic, Otto sees an evolution not just in how legal research is conducted, but in how lawyers interact with knowledge, data, and ultimately their clients.

Listen on mobile platforms:  Apple Podcasts | Spotify | YouTube

[Special Thanks to Legal Technology Hub for sponsoring this episode.]

Blue Sky: @geeklawblog.com @marlgeb
Email: geekinreviewpodcast@gmail.com
Music: Jerry David DeCicca

Transcript

Continue Reading Otto von Zastrow on MidPage.AI and the Future of AI-Powered Legal Research