The Geek in Review closes 2025 with Greg Lambert and Marlene Gebauer welcoming back Sarah Glassmeyer and Niki Black for round two of the annual scorecard, equal parts receipts, reality check, and forward look into 2026. The conversation opens with a heartfelt remembrance of Kim Stein, a beloved KM community builder whose generosity showed up in conference dinners, happy hours, and day-to-day support across vendors and firms. With Kim’s spirit in mind, the panel steps into the year-end ritual: name the surprises, own the misses, and offer a few grounded bets for what comes next.

Last year’s thesis predicted a shift from novelty to utility, yet 2025 felt closer to a rolling hype loop. Glassmeyer frames generative AI as a multi-purpose knife dropped on every desk at once, which left many teams unsure where to start, even when budgets had already been committed. Black brings the data lens: general-purpose gen AI use surged among lawyers, especially solos and small firms, while law firm adoption rose fast compared with earlier waves such as cloud computing, which crawled for years before pandemic pressure moved the needle. The group also flags a new social dynamic, status-driven tool chasing, plus a quiet trend toward business-tier ChatGPT, Gemini, and Claude as practical options for many matters when price tags for legal-only platforms sit out of reach for smaller shops.

Hallucinations stay on the agenda, with the panel resisting both extremes: doom posts and fan-club hype. Glassmeyer recounts a founder’s quip, “hallucinations are a feature, not a bug,” then pivots to an older lesson from KeyCite and Shepard’s training: verification never goes away, and lawyers owed that diligence long before LLMs arrived. Black adds a cautionary tale from recent sanctions, where a lawyer ran the same research through a stack of tools, creating a telephone effect and a document nobody fully controlled. Lambert notes a bright spot from the past six months: legal research outputs improved as vendors paired vector retrieval with legal hierarchy data, including court relationships and citation treatment, reducing off-target answers even while perfection stays out of reach.
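To make Lambert’s point concrete, here is a minimal sketch of what pairing vector retrieval with legal hierarchy data can look like: a reranker that blends embedding similarity with citator treatment and court level. The `Passage` fields, the treatment labels, and every weight below are invented for illustration; this describes no vendor’s actual pipeline.

```python
from dataclasses import dataclass

# Hypothetical metadata a research tool might attach to each retrieved
# passage; the fields and labels are illustrative, not a real schema.
@dataclass
class Passage:
    text: str
    vector_score: float  # cosine similarity from the embedding index
    court_level: int     # e.g. 3 = supreme, 2 = appellate, 1 = trial
    treatment: str       # citator signal: "followed", "criticized", "overruled"

# Invented weights: positive treatment nudges a passage up,
# negative treatment pushes it down hard.
TREATMENT_WEIGHT = {"followed": 0.2, "criticized": -0.3, "overruled": -1.0}

def rerank(passages: list[Passage]) -> list[Passage]:
    """Blend semantic similarity with hierarchy signals so an overruled
    trial-court opinion ranks below still-good appellate authority."""
    def score(p: Passage) -> float:
        return (p.vector_score
                + 0.1 * p.court_level
                + TREATMENT_WEIGHT.get(p.treatment, 0.0))
    return sorted(passages, key=score, reverse=True)
```

The design point is that hierarchy signals act as a corrective applied after semantic search, so an overruled case stops outranking good law that happens to score slightly lower on similarity.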

From there, the conversation turns to mashups across the market. Clio’s acquisition of vLex becomes a headline example, raising questions about platform ecosystems, pricing power, and whether law drifts toward an Apple versus Android split. Black predicts integration work across billing, practice management, and research will matter as much as M&A, with general tech giants looming behind the scenes. Glassmeyer cheers broader access for smaller firms, while still warning about consolidation scars from legal publishing history and the risk of feature decay once startups enter corporate layers. The panel lands on a simple preference: interoperability, standards, and clean APIs beat a future where a handful of owners dictate terms.

On governance, Black rejects surveillance fantasies and argues for damage control, strong training, and safe experimentation spaces, since shadow usage already happens on personal devices. Gebauer pushes for clearer value stories, and the guests agree early ROI shows up first in back office workflows, with longer-run upside tied to pricing models, AFAs, and buyer pushback on inflated hours. For staying oriented amid fractured social channels, the crew trades resources: AI Law Librarians, Legal Tech Week, Carolyn Elefant’s how-to posts, Moonshots, Nate B. Jones, plus Ed Zitron’s newsletter for a wider business lens. The crystal ball segment closes with a shared unease around AI finance, a likely shakeout among thinly funded tools, and a reminder to keep the human network strong as 2026 arrives.

Guests: Sarah Glassmeyer and Niki Black | Hosts: Marlene Gebauer and Greg Lambert

Receipts, RAG, and Reboots: Legal Tech’s 2025 Year-End Scorecard with Niki Black and Sarah Glassmeyer

Artificial intelligence has moved fast, but trust has not kept pace. In this episode, Nam Nguyen, co-founder and COO of TruthSystems.ai, joins Greg Lambert and Marlene Gebauer to unpack what it means to build “trust infrastructure” for AI in law. Nguyen’s background is unusually cross-wired—linguistics, computer science, and applied AI research at Stanford Law—giving him a clear view of both the language and logic behind responsible machine reasoning. From his early work in Vietnam to collaborations at Stanford with Dr. Megan Ma, Nguyen has focused on a central question: who ensures that the systems shaping legal work remain safe, compliant, and accountable?

Nguyen explains that TruthSystems emerged from this question as a company focused on operationalizing trust, not theorizing about it. Rather than publishing white papers on AI ethics, his team builds the guardrails law firms need now. Their platform, Charter, acts as a governance layer that can monitor, restrict, and guide AI use across firm environments in real time. Whether a lawyer is drafting in ChatGPT, experimenting with CoCounsel, or testing Copilot, Charter helps firms enforce both client restrictions and internal policies before a breach or misstep occurs. It’s an attempt to turn trust from a static policy on a SharePoint site into a living, automated practice.

A core principle of Nguyen’s work is that AI should be both the subject and the infrastructure of governance. In other words, AI deserves oversight but is also uniquely suited to implement it. Because large language models excel at interpreting text and managing unstructured data, they can help detect compliance or ethical risks as they happen. TruthSystems’ vision is to make governance continuous and adaptive, embedding it directly into lawyers’ daily workflows. The aim is not to slow innovation, but to make it sustainable and auditable.
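As a hedged illustration of what a governance layer embedded in the workflow might do, the sketch below screens a prompt against client restrictions before it reaches any model. The rule table, the matter-number pattern, and the function names are all invented for this example; nothing here describes how Charter actually works.

```python
import re

# Invented client-restriction rules: each matter number maps to tools
# the client has disallowed. Illustrative only, not Charter's actual
# policy format.
CLIENT_RESTRICTIONS = {
    "2025-0417": {"ChatGPT"},
    "2025-0988": {"ChatGPT", "Copilot"},
}

MATTER_RE = re.compile(r"\b20\d{2}-\d{4}\b")  # naive matter-number pattern

def screen_prompt(prompt: str, tool: str) -> tuple[bool, str]:
    """Return (allowed, reason), blocking a prompt that references a
    matter whose client has restricted the tool in use."""
    for matter in MATTER_RE.findall(prompt):
        if tool in CLIENT_RESTRICTIONS.get(matter, set()):
            return False, f"matter {matter}: client restricts {tool}"
    return True, "no restriction matched"

# A draft tied to matter 2025-0417 is blocked in ChatGPT but passes
# in a tool the client has not restricted.
print(screen_prompt("Summarize the deposition for matter 2025-0417", "ChatGPT"))
print(screen_prompt("Summarize the deposition for matter 2025-0417", "CoCounsel"))
```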

The conversation also tackles the myth of “hallucination-free” systems. Nguyen is candid about the limitations of retrieval-augmented generation, noting that both retrieval and generation introduce their own failure modes. He argues that most models have been trained to sound confident rather than be accurate, penalizing expressions of uncertainty. TruthSystems takes the opposite approach, favoring smaller, predictable models that reward contradiction-spotting and verification. His critique offers a reminder that speed and safety in AI rarely coexist by accident—they must be engineered together.
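To illustrate the contradiction-spotting idea in the abstract, here is a minimal sketch of a post-generation verification pass, assuming access to some natural-language-inference classifier; the `classify` callable is a hypothetical stand-in, not TruthSystems’ model.

```python
from typing import Callable, Iterable

# `classify` is a hypothetical stand-in for any natural-language-inference
# model: given (source_passage, claim), it returns "entailed",
# "contradicted", or "neutral". Nothing here is TruthSystems' code.
Label = str

def verify_claims(
    claims: Iterable[str],
    sources: list[str],
    classify: Callable[[str, str], Label],
) -> list[tuple[str, Label]]:
    """Label each generated claim against the retrieved sources: any
    contradiction wins, entailment requires at least one supporting
    passage, and everything else is flagged for human review."""
    results = []
    for claim in claims:
        labels = [classify(src, claim) for src in sources]
        if "contradicted" in labels:
            results.append((claim, "contradicted"))
        elif "entailed" in labels:
            results.append((claim, "entailed"))
        else:
            results.append((claim, "neutral"))  # unsupported; do not ship as fact
    return results
```

A claim with no entailing source surfaces as "neutral" rather than being silently accepted, which echoes the posture Nguyen describes: uncertainty gets surfaced instead of penalized.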

Finally, Nguyen discusses TruthSystems’ recent $4 million seed round, led by Gradient Ventures and Lightspeed, which will fund the expansion of their real-time visibility tools and firm partnerships. He envisions a future where firms treat governance not as red tape but as a differentiator, using data on AI use to assure clients and regulators alike. As he puts it, compliance will no longer be the blocker to innovation—it will be the proof of trust at scale.

Listen on mobile platforms: Apple Podcasts | Spotify | YouTube

[Special Thanks to Legal Technology Hub for sponsoring this episode.]

Email: geekinreviewpodcast@gmail.com
Music: Jerry David DeCicca

Trust at Scale: Nam Nguyen on How TruthSystems is Building the Framework for Safe AI in Law