This week we welcome back Niki Black to unpack the findings from the newly released 2026 Legal Industry Report from 8am. The conversation centers on a legal profession moving into a new phase of AI adoption, where individual lawyers are embracing general-purpose AI tools at a striking pace while many firms still lack even basic policies or training. Niki explains that this disconnect is especially visible among solo, small, and mid-sized firms, where limited resources often slow formal governance even as day-to-day use rises fast.

A major theme of the discussion is the widening gap between personal experimentation and institutional readiness. Niki notes that lawyers are not waiting for permission, and many are already relying on AI to support research, drafting, and routine work. At the same time, firms are struggling to provide guidance, training, and guardrails. The episode highlights the growing risk of shadow AI in legal practice, especially when lawyers and staff turn to unsanctioned tools to keep pace with client demands. For smaller firms, the answer is not elaborate bureaucracy, but practical direction, clear expectations, and a recognition that even a modest policy is better than none.

The conversation also turns to client expectations and the economic pressure AI is placing on the traditional law firm model. Greg and Marlene press Niki on whether firms are truly ready to move away from the billable hour as AI compresses the time needed to complete legal work. Niki argues that large firms face deep structural obstacles because compensation systems, staffing models, and internal economics remain tied to hourly billing. Still, she sees pressure building from in-house counsel, boutique competitors, and smaller firms that use technology to deliver comparable work at lower cost. The result is a market that may resist change, but not escape it.

Another standout part of the episode explores how AI is reshaping access to justice. Niki points to the promise of generative AI as a force multiplier for legal aid lawyers and public defenders, especially when paired with trusted tools and better funding. She rejects the idea that technology alone will solve the justice gap, but makes a strong case that AI, combined with stronger institutional support, helps lawyers serve more people with better results. At the same time, the hosts and Niki acknowledge the risks of a two-tiered system, where wealthier clients benefit from high quality tools while vulnerable users face lower quality, error-prone outputs.

By the end of the episode, the conversation expands from AI tools to a broader structural shift across firms, clients, and law schools. Niki sees the next three to five years as a period of deep change, where pricing, training, competition, and professional expectations all evolve at once. She also shares her own methods for keeping up, including RSS feeds, trusted blogs, and LinkedIn, with a few playful complaints about Substack making life more complicated. The episode leaves listeners with a clear message: the biggest issue is no longer whether AI will affect legal practice; it already does. The real question is whether the profession can adapt fast enough to manage the consequences wisely.

Listen on mobile platforms: Apple Podcasts | Spotify | YouTube | Substack

[Special Thanks to Legal Technology Hub for sponsoring this episode.]

Email: geekinreviewpodcast@gmail.com
Music: Jerry David DeCicca

Transcript:

Continue Reading Niki Black on AI Adoption, Billing Pressure, and the Governance Gap in Legal

This week we go into “talk show mode” for a special episode in which Marlene recaps her trip to the Women + AI 2.0 Summit at Vanderbilt Law, hosted by Cat Moon, and shares why the event felt different from the standard conference grind: more energy, more structure, and yes, a DJ.

The summit’s core focus sits right on a tension point in the wider AI conversation: the persistent narrative that women use AI less than men. Cat Moon’s framing, that if it’s true it’s a problem, and if it’s false it’s also a problem, sets the tone for a day built around participation and peer connection. The format uses “spark” cards with mini, midi, and maxi prompts to push attendees into small conversations, deeper reflection, and a final takeaway.

Marlene also highlights sobering research shared during the opening, including an “AI competence penalty” dynamic where identical work is judged differently depending on whether evaluators believe a man or a woman used AI. The discussion lands on why these biases matter inside legal workplaces, and what leaders and peers can do to reduce the social cost of being open about AI usage.

Interspersed throughout are short interviews with attendees and speakers. Nicole Morris (Emory) captures the day’s purpose, expanding AI knowledge, talking risks, and connecting across roles. Sabra Tomb (University of Dayton School of Law) reframes AI as a leadership amplifier, moving from day-to-day management overload toward strategy and vision. Adele Shen (Vanderbilt) offers a funny but sharp taxonomy of AI “experts,” including “technocratic oracles,” “extinction alarmists,” and “touch grass humanists,” which sparks a candid side conversation about self-promotion, authority vibes, and who becomes “the story” in AI discourse.

The episode closes with a look at how education and training can work better. Marlene and Greg lean into peer show-and-tell sessions, leadership modeling, and safe spaces, both governance-safe and learning-safe. A two-person segment from Suffolk Law (Chanal Neves McClain and Dyane O’Leary) adds a teaching twist, integrating AI tools into skills instruction without isolating “AI week” from real lawyering judgment. The final note comes from Stephanie Everett (Lawyerist) on the power of stories, and the reminder that people do not need to internalize the narrative someone else hands them.

Listen on mobile platforms: Apple Podcasts | Spotify | YouTube | Substack

[Special Thanks to Legal Technology Hub for sponsoring this episode.]

Email: geekinreviewpodcast@gmail.com
Music: Jerry David DeCicca

Transcript:

Continue Reading Women + AI Summit, Real Talk: Leadership, Learning, and Not Letting “The Trap” Write Your Story

Artificial intelligence has moved fast, but trust has not kept pace. In this episode, Nam Nguyen, co-founder and COO of TruthSystems.ai, joins Greg Lambert and Marlene Gebauer to unpack what it means to build “trust infrastructure” for AI in law. Nguyen’s background is unusually cross-wired—linguistics, computer science, and applied AI research at Stanford Law—giving him a clear view of both the language and logic behind responsible machine reasoning. From his early work in Vietnam to collaborations at Stanford with Dr. Megan Ma, Nguyen has focused on a central question: who ensures that the systems shaping legal work remain safe, compliant, and accountable?

Nguyen explains that TruthSystems emerged from this question as a company focused on operationalizing trust, not theorizing about it. Rather than publishing white papers on AI ethics, his team builds the guardrails law firms need now. Their platform, Charter, acts as a governance layer that can monitor, restrict, and guide AI use across firm environments in real time. Whether a lawyer is drafting in ChatGPT, experimenting with CoCounsel, or testing Copilot, Charter helps firms enforce both client restrictions and internal policies before a breach or misstep occurs. It’s an attempt to turn trust from a static policy on a SharePoint site into a living, automated practice.
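To make the idea concrete, here is a minimal sketch of what such a governance layer might look like, assuming a simple per-client policy model. The class names, policy fields, and rules are illustrative assumptions for this post, not Charter's actual design:

```python
import re
from dataclasses import dataclass, field

@dataclass
class ClientPolicy:
    client_id: str
    blocked_tools: set[str] = field(default_factory=set)
    blocked_patterns: list[str] = field(default_factory=list)  # regexes over prompt text

class GovernanceLayer:
    """Screens outgoing prompts against per-client restrictions and logs each decision."""

    def __init__(self, policies: dict[str, ClientPolicy]):
        self.policies = policies
        self.audit_log: list[dict] = []

    def check(self, client_id: str, tool: str, prompt: str) -> bool:
        policy = self.policies.get(client_id)
        allowed, reason = True, "no policy on file"
        if policy:
            if tool in policy.blocked_tools:
                allowed, reason = False, f"tool {tool!r} barred for this client"
            elif any(re.search(p, prompt, re.I) for p in policy.blocked_patterns):
                allowed, reason = False, "prompt matches a restricted pattern"
            else:
                reason = "passed all checks"
        # Every decision is recorded, so policy becomes auditable practice.
        self.audit_log.append(
            {"client": client_id, "tool": tool, "allowed": allowed, "reason": reason}
        )
        return allowed

# Example: a client that bars public chatbots and any prompt naming its settlement.
layer = GovernanceLayer({
    "acme": ClientPolicy("acme", blocked_tools={"chatgpt"},
                         blocked_patterns=[r"\bacme\b.*settlement"]),
})
assert layer.check("acme", "chatgpt", "Summarize this deposition") is False
assert layer.check("acme", "cocounsel", "Draft a motion outline") is True
```

The point of the sketch is the shape, not the rules: restrictions live in data, checks run before the request leaves the firm, and every decision leaves an audit trail.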

A core principle of Nguyen’s work is that AI should be both the subject and the infrastructure of governance. In other words, AI deserves oversight but is also uniquely suited to implement it. Because large language models excel at interpreting text and managing unstructured data, they can help detect compliance or ethical risks as they happen. TruthSystems’ vision is to make governance continuous and adaptive, embedding it directly into lawyers’ daily workflows. The aim is not to slow innovation, but to make it sustainable and auditable.
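As a rough illustration of that "AI governing AI" idea, the sketch below uses a model call to screen each request before it proceeds. The prompt, the risk labels, and the generic `ask` callable are assumptions for the example, not TruthSystems' API:

```python
from typing import Callable

# Illustrative prompt for an LLM acting as the compliance monitor.
SCREEN_PROMPT = (
    "You are a compliance screen for a law firm. Classify the request "
    "below as OK, PRIVILEGE_RISK (may expose client confidences), or "
    "POLICY_RISK (conflicts with firm AI policy). Reply with one label.\n\n"
    "Request: {request}"
)

def screen_request(ask: Callable[[str], str], request: str) -> str:
    """Use a model to classify a request's risk before it is acted on."""
    label = ask(SCREEN_PROMPT.format(request=request)).strip().upper()
    # Fail closed: anything unrecognized is escalated rather than waved through.
    return label if label in {"OK", "PRIVILEGE_RISK", "POLICY_RISK"} else "POLICY_RISK"
```

Because the monitor is itself a language model, it can read unstructured prompts and documents in flight, which is exactly the property Nguyen argues makes continuous governance feasible.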

The conversation also tackles the myth of “hallucination-free” systems. Nguyen is candid about the limitations of retrieval-augmented generation, noting that both retrieval and generation introduce their own failure modes. He argues that most models have been trained to sound confident rather than be accurate, penalizing expressions of uncertainty. TruthSystems takes the opposite approach, favoring smaller, predictable models that reward contradiction-spotting and verification. His critique offers a reminder that speed and safety in AI rarely coexist by accident—they must be engineered together.
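Here is a minimal sketch of that verification-first pattern, assuming a generic `ask` function that sends a prompt to any LLM and returns text. The prompts and control flow are illustrative, not TruthSystems' method:

```python
from typing import Callable

def answer_with_verification(ask: Callable[[str], str],
                             question: str,
                             sources: list[str]) -> dict:
    """Draft from sources, extract claims, verify each claim, abstain if unsupported."""
    context = "\n\n".join(sources)
    draft = ask(f"Using only these sources, answer the question.\n"
                f"Sources:\n{context}\n\nQuestion: {question}")
    claims = [c for c in ask(
        f"List every factual claim in this answer, one per line:\n{draft}"
    ).splitlines() if c.strip()]
    unsupported = []
    for claim in claims:
        verdict = ask(
            "Is the claim below directly supported by the sources? "
            "Answer only SUPPORTED or UNSUPPORTED.\n"
            f"Sources:\n{context}\n\nClaim: {claim}"
        ).strip().upper()
        if verdict != "SUPPORTED":
            unsupported.append(claim)
    # Reward honesty over confidence: refuse rather than guess.
    if unsupported:
        return {"answer": None, "unsupported_claims": unsupported}
    return {"answer": draft, "unsupported_claims": []}
```

The design choice mirrors Nguyen's critique: the system is built to surface uncertainty and contradiction rather than to maximize fluent-sounding output.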

Finally, Nguyen discusses TruthSystems’ recent $4 million seed round, led by Gradient Ventures and Lightspeed, which will fund the expansion of their real-time visibility tools and firm partnerships. He envisions a future where firms treat governance not as red tape but as a differentiator, using data on AI use to assure clients and regulators alike. As he puts it, compliance will no longer be the blocker to innovation—it will be the proof of trust at scale.

Listen on mobile platforms: Apple Podcasts | Spotify | YouTube

[Special Thanks to Legal Technology Hub for sponsoring this episode.]

Email: geekinreviewpodcast@gmail.com
Music: Jerry David DeCicca

Transcript:

Continue Reading Trust at Scale: Nam Nguyen on How TruthSystems is Building the Framework for Safe AI in Law

This week we are joined by Brandon Wiebe, General Counsel and Head of Privacy at Transcend. Brandon discusses the company’s mission to build privacy and AI governance solutions, outlining the evolution from manual governance processes to technical controls integrated directly into data systems. Transcend saw a need for more technical privacy and AI governance as data processing advanced across organizations.

Wiebe offers examples of AI governance challenges, such as engineering teams using GitHub Copilot and sales and marketing teams using tools like Jasper. He created a lightweight AI Code of Conduct at Transcend to guide responsible AI adoption, and believes technical enforcement, such as cataloging the AI systems in use across an organization, will also be key.
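As a rough sketch of what cataloging AI systems could look like in practice, the snippet below models a minimal inventory with an approval flag that a code of conduct could be enforced against. The schema and field names are hypothetical, not Transcend's:

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    name: str                          # e.g. "GitHub Copilot"
    owner_team: str                    # team accountable for the tool
    data_categories: tuple[str, ...]   # data the tool may process
    approved: bool                     # cleared under the AI Code of Conduct?

catalog = [
    AISystemRecord("GitHub Copilot", "Engineering", ("source code",), True),
    AISystemRecord("Jasper", "Marketing", ("public copy",), True),
    AISystemRecord("Unknown browser plugin", "Sales", ("customer emails",), False),
]

# Enforcement hook: surface anything in use that has not been approved.
unapproved = [r.name for r in catalog if not r.approved]
print("Needs review:", unapproved)
```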

On ESG governance changes, Wiebe sees parallels to privacy regulation, which evolved from voluntary principles into specific technical requirements. He expects AI governance to follow a similar path, but much faster, requiring legal teams to become technical experts. Engaging early in development, with lightweight guidance, is key.

Transcend’s new Pathfinder tool provides observability into AI systems to enable governance. Acting as an intermediary layer between internal tools and foundation models such as OpenAI’s, it aims to provide oversight of, and auditability into, those systems.
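To illustrate the intermediary pattern, here is a minimal sketch of a wrapper that records every call to an upstream model in an append-only audit file. The function and log format are assumptions for the example, not Pathfinder's implementation:

```python
import json
import time
import uuid
from typing import Callable

def make_audited_client(upstream: Callable[[str], str],
                        log_path: str) -> Callable[[str], str]:
    """Wrap a model client so every request/response pair is logged for audit."""
    def call(prompt: str) -> str:
        record = {"id": str(uuid.uuid4()), "ts": time.time(), "prompt": prompt}
        response = upstream(prompt)
        record["response"] = response
        with open(log_path, "a") as f:
            f.write(json.dumps(record) + "\n")  # one JSON line per call
        return response
    return call

# Usage: wrap the real model client once, then hand the wrapped client to
# internal tools so observability is automatic rather than opt-in.
# client = make_audited_client(my_model_call, "ai_audit.jsonl")
```

Because internal tools only ever see the wrapped client, oversight does not depend on each team remembering to log its own usage.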

Looking ahead, Wiebe believes GCs must develop deep expertise in AI technology, either themselves or by building internal teams. Understanding the technology will allow counsel to provide practical, discrete advice as adoption accelerates. Technical literacy will be critical.

Listen on mobile platforms: Apple Podcasts | Spotify | YouTube (NEW!)

Contact Us:

Twitter: @gebauerm or @glambert
Threads: @glambertpod or @gebauerm66
Voicemail: 713-487-7821
Email: geekinreviewpodcast@gmail.com
Music: Jerry David DeCicca

Transcript:

Continue Reading Observing the Black Box: Transcend’s Brandon Wiebe’s Insights into Governing Emerging AI Systems (TGIR Ep. 218)