
This week, we sit down with two guests from Anthropic, Matt Samuels, Senior Product Counsel, and Den Delimarsky, a core maintainer of the Model Context Protocol, or MCP. Together, they unpack why MCP is drawing so much attention across the legal industry and why some are calling it the USB-C for AI. For law firms long burdened by disconnected systems, scattered data, and the infamous integration tax, MCP offers a shared framework for connecting models to the places where real work and real knowledge live, from iManage and Slack to email, data lakes, and internal tools.

Den explains that the promise of MCP is not tied to one model or one vendor. Instead, it creates a standardized way for AI tools to securely interact with many different systems without forcing organizations to build one-off integrations every time they want to connect a new source. The conversation gets especially relevant for legal listeners when Greg and Marlene press on issues like permissions, ethical walls, least-privilege access, and auditability. The answer from Anthropic is reassuring. MCP is built to work with familiar enterprise security concepts such as OAuth and role-based access, meaning firms do not have to throw out their security model in order to explore new AI workflows.
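To make the idea concrete, here is a toy sketch (not the actual MCP SDK; all names are invented) of what a standardized tool interface with least-privilege, role-based access looks like: tools register once behind a common calling convention, and every call passes through the same permission gate.

```python
from dataclasses import dataclass, field

# Illustrative only: a toy "tool server" showing the MCP-style idea of a
# single standardized interface plus role-based access checks.
# ToolServer, register, and call are hypothetical names, not the MCP SDK.

@dataclass
class ToolServer:
    tools: dict = field(default_factory=dict)  # name -> (allowed_roles, fn)

    def register(self, name, allowed_roles, fn):
        self.tools[name] = (set(allowed_roles), fn)

    def call(self, name, user_roles, **kwargs):
        roles, fn = self.tools[name]
        if not roles & set(user_roles):  # least-privilege gate on every call
            raise PermissionError(f"{name}: access denied")
        return fn(**kwargs)

server = ToolServer()
server.register("search_dms", {"attorney", "paralegal"},
                lambda query: f"results for {query!r}")

print(server.call("search_dms", {"attorney"}, query="ethical walls"))
```

The point of the sketch is the shape, not the code: one registration step per source, one uniform call path, and the firm's existing role model enforced centrally rather than rebuilt per integration.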

Matt brings the legal and operational lens, translating MCP into practical use cases for lawyers, legal ops teams, and security leaders. He describes how AI becomes far more useful once it has access to the systems lawyers already rely on every day, while still operating within carefully defined administrative controls. The discussion highlights a key shift in how firms should think about AI. This is no longer about asking a chatbot a clever question and getting a polished paragraph back. With MCP, firms are moving toward systems where AI can retrieve, correlate, summarize, draft, and support actions across multiple platforms, all while staying inside the guardrails set by the organization.

The episode also explores how MCP fits into the rise of agentic workflows, apps, plugins, and skills. Rather than treating AI as a static assistant, Anthropic describes a future where these tools become active participants in legal work, pulling together information from multiple sources, helping assemble case timelines, drafting notes into a shared document, and supporting lawyers in a far more integrated workspace. The conversation around skills is especially useful for firms thinking about standard operating procedures, preferred drafting styles, escalation rules, and repeatable work product. Skills and MCP do different jobs, but together they start to look like the operating system for structured legal workflows.

By the end of the conversation, one message comes through clearly. The legal profession is still early in this shift, but the pace is picking up fast. Both Matt and Den encourage listeners to stop treating these tools like abstract future concepts and start experimenting with them now. At the same time, they offer an important note of caution. As much as these systems promise speed and efficiency, lawyers still need to protect the craft of lawyering, their judgment, and the human choices that matter most. For firms trying to make sense of where AI is headed next, this episode offers a grounded and practical look at the infrastructure layer that could shape what comes next.

Listen on mobile platforms: Apple Podcasts | Spotify | YouTube | Substack

[Special Thanks to Legal Technology Hub for sponsoring this episode.]

Email: geekinreviewpodcast@gmail.com
Music: Jerry David DeCicca
Transcript

Continue Reading Anthropic’s Matt Samuels and Den Delimarsky – Claude & MCP: Building the USB-C for the Legal Tech Stack

Anastasia Boyko joins us this week for a wide-angle conversation about AI adoption, leadership, and the uncomfortable truth behind “we are watching what peer firms do.” A Yale-trained tax lawyer with experience spanning Axiom, legal education, and innovation leadership, Boyko argues that precedent-driven instincts are turning into a liability when the underlying rules of the market are shifting in real time.

The episode opens with lessons from the Women + AI 2.0 Summit at Vanderbilt and the “AI competence penalty” narrative. Boyko’s central principle for law firm leaders is simple, stop copying the competition and start operating with intention. Strategic planning matters more than tool shopping, especially when uncertainty makes leaders freeze, over-index on fear, or chase noise instead of outcomes.

From there, the conversation sharpens into client reality. Boyko shares what she is hearing from in-house leaders, and it is not comforting for firms. Legal departments are working to reduce dependence on outside counsel, business partners inside companies often accept “good enough,” and the models keep improving. The risk is not losing to a peer firm; it is losing the client relationship because the work stops feeling necessary.

A major theme is talent and the apprenticeship gap. Boyko argues firms underinvest in people, even as they spend aggressively on software stacks. AI can help junior lawyers with coaching and confidence, but it does not replace mentorship, judgment-building, or context. The skills that matter now include client advisory, operational thinking, critical judgment, and the ability to solve problems across a complex system, not only perform discrete tasks in a vacuum.

The episode closes on legal education and the future value of the JD. Boyko urges students to be selfish about learning AI, especially when faculty guidance comes from avoidance or philosophy rather than experimentation. Looking ahead, she predicts the JD’s value shifts upward, away from rote production and toward proactive advisory work, relationships, anticipatory counsel, and wisdom-driven judgment. In other words, fewer fire drills, more looking around corners.


Continue Reading Anastasia Boyko on Advisor Mode, Training Lawyers for the Post-Pyramid Firm

This week we go “talk show mode” for a special episode where Marlene recaps her trip to the Women + AI 2.0 Summit at Vanderbilt Law, hosted by Cat Moon, and shares why the event felt different from the standard conference grind, more energy, more structure, and yes, a DJ.

The summit’s core focus sits right on a tension point in the wider AI conversation. There’s a persistent narrative that women use AI less than men. Cat Moon’s framing, “if it’s true, it’s a problem, and if it’s false, it’s also a problem,” sets the tone for a day built around participation and peer connection. The format uses “spark” cards (mini, midi, and maxi prompts) to push attendees into small conversations, deeper reflection, and a final takeaway.

Marlene also highlights sobering research shared during the opening, including an “AI competence penalty” dynamic where identical work is judged differently depending on whether evaluators believe a man or a woman used AI. The discussion lands on why these biases matter inside legal workplaces, and what leaders and peers can do to reduce the social cost of being open about AI usage.

Interspersed throughout are short interviews with attendees and speakers. Nicole Morris (Emory) captures the day’s purpose, expanding AI knowledge, talking risks, and connecting across roles. Sabra Tomb (University of Dayton School of Law) reframes AI as a leadership amplifier, moving from day-to-day management overload toward strategy and vision. Adele Shen (Vanderbilt) offers a funny but sharp taxonomy of AI “experts,” including “technocratic oracles,” “extinction alarmists,” and “touch grass humanists,” which sparks a candid side conversation about self-promotion, authority vibes, and who becomes “the story” in AI discourse.

The episode closes with a look at how education and training can work better. Marlene and Greg lean into peer show-and-tell sessions, leadership modeling, and safe spaces, both governance-safe and learning-safe. A two-person segment from Suffolk Law (Chanal Neves McClain and Dyane O’Leary) adds a teaching twist, integrating AI tools into skills instruction without isolating “AI week” from real lawyering judgment. The final note comes from Stephanie Everett (Lawyerist) on the power of stories, and the reminder that people do not need to internalize the narrative someone else hands them.


Continue Reading Women + AI Summit, Real Talk: Leadership, Learning, and Not Letting “The Trap” Write Your Story

The billable hour has survived a lot of threats, from alternative fee arrangements to client procurement, but this episode makes the case that AI changes the pressure level. We open with a blunt assessment, time compresses, clients push back, and the old strategy of “work more to earn more” stops scaling. Enter Stefan Ciesla, co-founder and CEO of Ayora, who frames the moment as less a tech problem and more an operating model problem. Law firms still place P&L accountability on individual partners who carry deep legal specialization, then ask them to moonlight as revenue managers. Stefan argues firms are starting to replace that fragile setup with tools that support decision-making across pricing, budgeting, and matter management.

Stefan’s origin story is half high finance, half clinical decision science. He came out of investment banking and professional services transactions, his co-founder Dr. Gordon McKenzie came out of surgery and a PhD path tied to decision science and software. Together they pulled lessons from clinical triage and continuous improvement into the law firm context, focusing on how experts make better decisions under constraints. The hosts tease out the cultural weirdness at the center of the partnership model. Partners often take the long view for client relationships, yet short-term firm economics still take damage through write-offs, scope creep, and messy budgeting. Stefan’s pitch is reconciliation, align client-first instincts with firmer, data-backed pricing and project discipline.

A core anchor for the conversation is the often-quoted $36 billion annual “value gap,” described as preventable revenue leakage tied to write-offs, weak billing practices, bad data, and poor working capital hygiene. Stefan suggests the number matters less than the trend line. AI pushes a new kind of risk, mispricing innovation. If AI reduces billable hours, firms face a squeeze between steep rate increases and client resistance, then end up forced to express value in new ways. The show leans into a spicy idea, the push to change is no longer only client-driven. Stefan sees rising pressure from inside firms, often from the CFO and operations leaders trying to fund AI investment and protect cash flows in a higher-interest-rate environment. Greg sums it up with the line, “the call is coming from inside the house.”

Ayora’s product angle lands on two hard truths, pricing tools in legal have a rough track record, and law firm data quality has been a “25-year overnight problem.” Stefan explains why earlier tools struggled, low urgency when billable hours printed money, ugly underlying time and matter data, and products that were either too complex for occasional users or too simplistic for real-world exceptions. Ayora’s bet is that the data problem is solvable. Their system uses large language models plus proprietary approaches, including work-type ontologies, to extract signal from messy time narratives and matter metadata. The goal is consistent fields and usable categorization across tasks and phases, even when client taxonomies differ. Stefan claims field-level reconstruction and normalization at high accuracy, enough to power a chatbot-style interface that generates a pricing proposal in roughly 90 to 120 seconds by finding precedent matters and adapting them to the new scope.
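As a rough stand-in for the LLM-plus-ontology approach described above (the actual system is proprietary; the categories and keywords here are invented), the normalization step can be pictured as mapping messy time-entry narratives onto standard work-type codes:

```python
# Toy illustration of work-type normalization: map free-text time-entry
# narratives to standardized task codes via a keyword ontology.
# In the real product an LLM plus proprietary ontologies does this work;
# these codes and keywords are hypothetical.

WORK_TYPE_ONTOLOGY = {
    "L110_fact_investigation": ["interview", "witness", "investigate"],
    "L210_pleadings": ["draft complaint", "answer", "motion"],
    "L400_trial_prep": ["exhibit", "trial", "prep"],
}

def normalize_entry(narrative: str) -> str:
    """Return the first work-type code whose keywords appear in the narrative."""
    text = narrative.lower()
    for code, keywords in WORK_TYPE_ONTOLOGY.items():
        if any(kw in text for kw in keywords):
            return code
    return "unclassified"

print(normalize_entry("Draft complaint re lease dispute"))  # -> L210_pleadings
```

A keyword lookup obviously cannot handle real-world messiness; the claim in the episode is that modern language models can do this classification reliably enough, at scale, to make downstream pricing comparisons across precedent matters possible.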

The conversation closes on the part everyone in a partnership feels in their bones, culture beats software. Moving away from the billable hour is not only a finance shift, it is an identity shift, habit shift, and trust shift. Stefan describes adoption as a joint change strategy, with peers inside the firm as allies, and lots of direct conversations with lawyers to build trust in the recommendations. On the “generational gap” question, he leans toward curiosity over age. Some of their heaviest users have plenty of gray hair, and they tend to be the lawyers who care about how a practice runs. For his personal AI usage, Stefan gives an honest founder answer, meal planning for a two-year-old, automating company chores, and using AI as a sparring partner, with Notion as his favorite tool. His crystal ball point is one law firm leaders should underline twice, gross margin dynamics get messier as tech and LLM costs become part of the delivery mix, and the distance between inputs and outputs grows, driving both consolidation pressure and a new wave of innovation.


Continue Reading Revenue Leakage, Metadata, and the Post Billable Hour Playbook, Stefan Ciesla of Ayora

A fresh Anthropic announcement set off a week of market jitters and existential questions: what happens when the big model shops ship “legal productivity” features and the public markets flinch? This week, we bring Otto von Zastrow back for a rapid-response conversation, with a front-row view from New York and a blunt take: software grows cheaper to reproduce, so value migrates. The discussion lands on a key distinction, interface versus data, and why the old guard still holds leverage even as new entrants sprint.

From there, the conversation zooms in on “systems of record” and the uneasy truth that the safest vault often loses mindshare when a new interface sits on top. Otto points to email, calendar, SharePoint, DMS platforms, and the growing power of a single chat workspace to become the place where work happens. The hosts press on a critical nuance for lawyers: legal research data is not flat, and “good law” demands hierarchy, treatment, and reliable citation context, not a pile of cases plus vibes.

Otto frames Midpage.ai as a data company first, built on continuous court ingestion plus normalization that used to demand armies of editors. He argues AI turns messy inputs into structured repositories at a scale that favors speed and breadth, yet accuracy still requires process design and verification loops. Greg sharpens the point for litigators: the bar is not clever answers, the bar is defensible citations, negative treatment, and confidence that the record matches reality. Otto agrees on the need for trust, then flips the lens: many annotation tasks look like grind work where modern models, paired with strong QA, start to outperform large manual pipelines.

The headline feature is integration via Model Context Protocol, described as a USB-C style connector for tools and models. Midpage chose distribution inside Claude and ChatGPT rather than forcing lawyers into yet another standalone site. Otto explains the wager: lawyers want fewer surfaces, and general chat platforms ship features at a pace no niche vendor matches alone, so the smart move is to meet users where daily work already lives. The demo story centers on research inside chat, with Midpage returning real case links and citations, then letting the user push deeper with uploads and follow-on tasks, while keeping verification one click away.

The back half turns to second-order effects: pricing, agent spend, and the rise of “vibe” work where professionals act more like managers of agent teams than sole authors of first drafts. Marlene raises governance and liability when internal DIY tools pop up outside formal review, and Otto predicts a pendulum toward professionalized deployment plus change management. The conversation closes on Midpage’s “holy grail” topic, citators and the case relationship graph, plus a clear-eyed forecast: standalone research websites shrink as a primary workspace, while research becomes groundwork performed by agents, with lawyers spending more time interrogating results than running searches.


Continue Reading Midpage Goes Native: Legal Research Inside Claude and ChatGPT, with Otto von Zastrow

Ray Brescia joins The Geek in Review this week to unpack a role with peak academia vibes, Associate Dean for Research and Intellectual Life at Albany Law School. Greg frames the title as “Chief Curator of Smart People Ideas,” and Ray embraces a “player-coach” approach, coaching faculty scholarship, unblocking stalled projects, and connecting peers across disciplines. The throughline is community, research momentum, and a practical view of how ideas move from draft to impact.

The conversation then pivots to the core thesis of Ray’s book, Lawyer 3.0. Ray maps the legal profession across three eras: Lawyer 1.0 as a low-barrier “amorphous bar,” Lawyer 2.0 as the institutional buildout of law schools, bar exams, ethics codes, and modern law firms, and Lawyer 3.0 as the next inflection point driven by technology. Ray ties prior shifts to urbanization, immigration, and industrial-scale commerce, then parallels those forces with today’s generative AI and analytics reshaping research, drafting, discovery, and service delivery.

Ray retells the famous milkshake study, then translates the idea into legal services: clients are not shopping for “a lawyer,” clients are shopping for problem resolution. This reframing pushes law firms to examine intake, scoping, and service design through the lens of client outcomes, business problems, and life problems, not internal practice labels. The milkshake becomes a metaphor for product-market fit in law, with fewer crumbs on the steering wheel.

Ray contrasts “bespoke services” with productized pathways, including a Model T style offering that meets most client needs at lower cost, plus higher-cost custom work when risk or complexity demands. Ray highlights expert-system style workflows such as Citizenshipworks, describing a TurboTax-like experience for straightforward matters, with “red flags” triggering referral to a lawyer. The same logic extends to limited scope representation and “lawyer for the day” programs in high-volume courts, where informed consent, reasonable scope, and “first, do no harm” reduce the chance of clients feeling abandoned midstream.
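The “red flag” triage pattern Ray describes can be sketched in a few lines (a hypothetical illustration, not how Citizenshipworks is actually built; the flag names are invented): routine answers flow through the guided workflow, while risk signals route the matter to a lawyer.

```python
# Hypothetical sketch of expert-system triage: self-service for the
# straightforward path, referral to counsel when a risk flag appears.
# Flag names and the answers schema are invented for illustration.

RED_FLAGS = {"prior_removal_order", "criminal_history", "pending_litigation"}

def triage(answers: dict) -> str:
    """Route a matter based on intake answers: self-service or referral."""
    flagged = sorted(k for k, v in answers.items() if v and k in RED_FLAGS)
    if flagged:
        return f"refer_to_lawyer: {flagged}"
    return "continue_self_service"

print(triage({"prior_removal_order": False, "criminal_history": False}))
# -> continue_self_service
```

The design choice worth noting is that the system never tries to resolve the hard case itself; like clinical triage, its job is to recognize when the matter exceeds the tool's safe scope, which is how “first, do no harm” gets operationalized.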

The final stretch tackles law firm AI adoption, hallucination risk, and professional responsibility. Ray stresses minimum competence: verify cases, verify quotations, verify sources, and treat generative outputs as drafts or starting points, not final work product. The panel discusses guardrails, education, and workflow design for large firms, plus the rising reality of clients arriving with AI-generated “research.” Ray’s crystal ball points toward more commoditized legal services at scale, a latent market of underserved people, and stronger interdisciplinary collaboration between lawyers and technologists so legal education aligns with Lawyer 3.0 realities.


Continue Reading Lawyer 3.0 and the Milkshake Test: Ray Brescia on Legal AI, Client Value, and the Next Wave of Lawyering

Sateesh Nori joins us on The Geek in Review for an episode that flips the usual legal innovation conversation away from law firm efficiency and toward survival-grade help for people stuck in housing courts and legal aid queues. They open with news from Sateesh himself, he has started a new role with LawDroid, working with Tom Martin, and he frames the mission in plain terms. Legal tech should stop orbiting lawyers and start serving the person with the problem, especially the person who does not even know where to begin.

Sateesh traces his path into law through debate, literature, politics, and a desire to push back on a family tradition of medicine. He describes his work as a long, continuous pursuit of fairness rather than a single turning point, and he admits the early myth that drew many into the profession, the dream of dramatic courtroom advocacy. The conversation quickly lands on the core tension, the legal system sells itself as rule of law and due process, yet ordinary people experience confusion, delay, and closed doors.

From there, Sateesh offers his critique of the current AI gold rush in legal. Too many products promise “faster horses” for lawyers, while the access to justice gap remains untouched because the real bottleneck sits upstream. People need early guidance, clear pathways, and tools that reduce friction before problems metastasize into crises. He argues for technology as “life-preserving tools,” not lawyer toys, and pushes the industry to center tenants, families, and workers navigating high-stakes issues without counsel.

The episode gets concrete with Depositron, a tool Sateesh helped bring to life with LawDroid to help renters recover security deposits through a simple, mobile-friendly workflow. He shares back-of-the-napkin math showing how large the problem is in New York, and why small, focused tools matter at scale. Greg ties the theme to earlier Geek in Review conversations about courts as a service, with the reminder that users experience the justice system like a bureaucracy, not a public utility built for them.

Finally, Sateesh expands the lens to systemic redesign, triage and intake failures, burnout in legal aid, and the hard truth that the current one-on-one model leaves most people unserved. He explores funding ideas ranging from public investment to small-fee consumer tools that sustain themselves, and he sketches future-facing concepts like AI-assisted dispute resolution to provide faster closure. In the crystal ball segment, he predicts a reckoning for the legal market as AI reshapes client expectations, with major implications for law students, staffing models, and the profession’s sense of purpose.


Continue Reading Sateesh Nori Joins LawDroid: AI Tools for Access to Justice, Housing Court, and Legal Aid

Quinten Steenhuis brings a builder’s mindset to legal innovation, rooted in early Indymedia activism where scavenged hardware became community infrastructure. That scrappy origin story carries through a dozen years of eviction defense at Greater Boston Legal Services, with a steady focus on tools that help people solve problems without waiting for a savior in a suit. Along the way, Quinten also lived the unglamorous side of mission tech, keeping systems funded, supported, and usable when budgets get tight and priorities get loud.

The conversation then jumps to Suffolk Law’s approach to generative AI education, including a required learning track for first-year students. Quinten frames the track as foundational training, then points to a deeper bench of follow-on courses and the LIT Lab clinic where students build with real tools, real partners, and real stakes. The throughline stays consistent, exposure alone solves nothing, so Suffolk puts reps, projects, and practice behind the syllabus.

A standout segment tackles the “vaporware semester” problem, where student-built prototypes fade out once finals end. The LIT Lab fights that decay by narrowing tool choices, standardizing around DocAssemble, and supervising work with a clinic-style model, staff stay close, quality stays high, and maintenance stays owned. Projects ship through CourtFormsOnline, with ongoing updates, volunteer support, and a commitment to keep public-facing legal help online for the long haul.

Then the episode turns toward agentic workflows, with examples from Quinten’s consulting work in Virginia and Oregon. One project uses voice-based intake to screen for eligibility, confirm location and income, gather the story in a person’s own words, and route matters into usable categories. Another project speeds bar referral by replacing slow human triage with faster classification and better user interaction patterns, fewer walls of typing, more guided choices, more yes-or-no steps, and fewer dead ends.

In the closing stretch, Quinten shares the sources feeding his learning loop, LinkedIn, Legal Services Corporation’s Innovations conference, the LSNTAP mailing list, podcasts, and Bob Ambrogi’s LawSites, plus the occasional spicy Reddit detour. The crystal ball lands on a thorny challenge for both academia and practice, training lawyers for judgment and verification when AI outputs land near-correct most of the time, then fail in the exact moment nobody expects. Quinten’s bottom line feels blunt and optimistic at once, safe workflows matter, and the public already uses general chat tools for legal help, so the legal system needs harm-reducing alternatives that work.


Continue Reading From Legal Aid to LIT Lab: Quinten Steenhuis and the Builder’s Approach to AI

Cat Moon and Mark Williams return to The Geek in Review wearing two hats, plus one tiara. The conversation starts at Vanderbilt’s inaugural AI Governance Symposium, where “governance” means wildly different things depending on who shows up. Judges, policy folks, technologists, in-house leaders, and law firm teams all brought separate definitions, then bumped into each other during generous hallway breaks. Those collisions led to new research threads and fresh coursework, which feels like the real product of a symposium, beyond any single panel.

One surprise thread moved from wonky sidebar to dinner-table topic fast, AI’s energy appetite and the rise of data centers as a local political wedge issue. Mark describes needing to justify the topic months earlier, then watching the news cycle catch up until no justification was needed. Greg connects the dots to Texas, where energy access, on-site generation, and data-center buildouts keep lawyers busy. The point lands, AI governance lives upstream from prompts and policies, down in grids, zoning fights, and infrastructure decisions.

From there, the episode pivots to training, law students, and the messy transition from “don’t touch AI” to “your platforms already baked AI into the buttons.” Mark shares how students now return from summer programs having seen tools like Harvey, even if firms still look like teams building the plane during takeoff. Cat frames the real need as basic, course-by-course guidance so students gain confidence instead of fear. Greg adds a perfect artifact from the academic arms race, Exam Blue Book sales jumping because handwritten exams keep AI out of finals, while AI still helps study through tools like NotebookLM quiz generation.

Governance talk gets practical fast, procurement, contract language, standards, and the sneaky problem of feature drift inside approved tools. Mark flags how smaller firms face a brutal constraint problem, limited budget, limited time, one shot to pick from hundreds of products, and no dedicated procurement bench. ISO 42001 shows up as a shorthand signal for vendor maturity, though standards still lag behind modern generative systems. Marlene brings the day-to-day friction, outside counsel guidelines, client consent, and repeated approvals slow adoption even after a tool passes internal reviews. Greg nails the operational pain, vendors ship new capabilities weekly, sometimes pushing teams from “closed universe” to “open internet” without much warning.

The closing crystal ball lands on collaboration and humility. Cat argues for a future shaped by co-creation across firms, schools, and students, not a demand-and-defend standoff about “practice-ready” graduates. Mark zooms out to the broader shift in the knowledge-work apprenticeship model, fewer beginner reps, earlier specialization pressure, and new ownership models knocking on the door in places like Tennessee. Along the way, Cat previews Women + AI Summit 2.0, with co-created content, travel stipends for speakers, workshops built around take-home artifacts, plus a short story fiction challenge to write women into the future narrative, tiara energy optional but encouraged.


LINKS:

CAT MOON
Vanderbilt AI Law Lab
VAILL Substack
Women + AI Summit
Practising Law Institute (PLI)
American Arbitration Association (AAA)
Legal Technology Hub

MARK WILLIAMS
Hotshot Legal
The Information
Understanding AI (Timothy B. Lee)
One Useful Thing (Ethan Mollick)
SemiAnalysis
ISO/IEC 42001 standard overview
EU AI Act (Regulation (EU) 2024/1689)
Chip War (Chris Miller, publisher page)


Continue Reading Tiara Time and Data Center Politics: Vanderbilt’s AI Governance Playbook with Cat Moon and Mark Williams

Judge Scott Schlegel of the Louisiana Fifth Circuit Court of Appeal joins The Geek in Review for a candid, funny, and unflinchingly practical conversation about AI inside the judicial system. Schlegel wears multiple hats, appellate judge, former prosecutor, reform-minded builder, plus a podcaster and Substack writer who speaks plainly about what works and what fails when technology hits real people on real timelines. The throughline stays consistent, courts do not need more hype, courts need competence, guardrails, and a process mindset.

Judge Schlegel tackles the messy reality of AI disclosures, certifications, and uneven court rules across jurisdictions. His core message lands fast, judicial authority lives with the judge, not an AI system. From there, he outlines why chambers guidance matters, along with a structured, step-by-step approach for responsible drafting support, including prompt discipline and workflow thinking. The goal stays simple, faster decisions without surrendering judgment to “bot overlords.”

The discussion then shifts to constraints judges live with every day, budgets, procurement rules, security anxiety, and the gap between shiny vendor demos and courthouse reality. Schlegel argues for a scrappy, process-first approach using small pilots, one chambers, one workflow, one measurable result. He compares the moment to early “cloud” adoption lessons, pay for the right security, avoid free tools where the user becomes the product, and treat sensitive records with strict care. Courts will see broader adoption as enterprise-grade options become attainable and baked into trusted platforms.

Then comes the part that lingers in your head after the episode ends, deepfakes and voice cloning as a near-term threat to due process, especially in domestic violence and protective order contexts. Schlegel explains why judges tend to err on the side of safety, and why “damage done” shows up long before expert testimony arrives. His practical recommendation focuses on pretrial practice, require disclosure, surface manipulation concerns early, and reduce surprises at trial. He even shares a simple family safety habit, a private “secret word” to confirm identity during urgent calls, since voice cloning tools lower the barrier for fraud.

Finally, Schlegel offers a sharp warning about confirmation bias, large language models often aim to please the user, which benefits advocates and harms neutral decision-making. His answer: an “AI alignment test” mindset, deliberate prompting, and refusal to outsource the white-page moment to a model. For the future, he points toward structural change courts rarely receive funding for, true legal technologists who redesign case management and public-facing guidance at scale. If courts stop printing emails and living in wire baskets, progress follows, and yes, somewhere in a parallel universe, Schlegel still wants a hologram machine.

Links

Judge Schlegel, his court, and his work

AI-in-courts guidance, plus his newsletter

Deepfakes, provenance, and content credentials

  • C2PA, Coalition for Content Provenance and Authenticity, “About” page


Continue Reading Bot Overlords, Deepfakes, and the Weight of the Robe: Judge Scott Schlegel on AI in the Courts