
Librarian, lawyer, knowledge manager, competitive analyst, computer programmer... I've taken the Renaissance Man approach to working in the legal industry and have found it very rewarding. My modus operandi is to look at unrelated items and create a process that ties them together. The overall goal is to make the resulting information better than the individual parts that make it up.

The latest episode of The Geek in Review finds Greg Lambert and Marlene Gebauer back from Dallas with a sharp, grounded recap of the Texas Trailblazers conference, an event that stayed close to the daily realities of legal work instead of drifting into glossy predictions. Their conversation centers on a legal industry trying to sort out what AI means right now, in billing, workflow, training, pricing, governance, and client expectations. What stands out most is the hosts’ focus on the practical tension between what the tools are capable of and what law firms and legal departments are structurally ready to absorb.

A major thread in the discussion is the risk of what one speaker called “cognitive surrender,” the habit of trusting AI output too quickly and handing off too much human judgment in the process. Greg and Marlene treat this as less of a software issue and more of a workflow and education issue. The point is not whether AI produces polished work. The point is whether organizations are building systems where review, judgment, and accountability still sit with people. Their conversation ties this concern to legal practice, education, and even K-12 learning, showing how widespread the temptation has become to accept fluent output without enough friction or scrutiny.

The episode also takes a hard look at the pressure AI is putting on the billable hour. Marlene frames the issue well when she notes that AI does not kill the billable hour so much as expose its weaknesses. Across the conference, the hosts heard repeated concern about the mismatch between efficiency gains and the financial structures law firms still rely on. If AI reduces the time needed for many tasks, then firms, associates, pricing teams, and clients all have new incentives to sort through. Greg and Marlene highlight the awkward moment the industry is in, where firms want to talk about value while clients are also eyeing the chance to pay less for faster work. The result is a growing need for honest conversations about pricing, outcomes, and what legal value should mean when time is no longer the cleanest measure.

What gives the episode its energy is the number of concrete examples pulled from the conference. The hosts discuss lower-cost multi-state surveys, large-scale analysis of rights-of-way documents, and internal workflow improvements built with existing tools like SharePoint and Copilot on little or no budget. These stories show AI not as abstract promise, but as a way to get work done that used to be too expensive, too tedious, or too slow to tackle at all. At the same time, Greg and Marlene stay skeptical in the right places, especially when the conversation turns to legal research, citation accuracy, and the idea that technology vendors have somehow solved problems that law librarians and researchers know are stubbornly difficult.

By the end of the episode, the biggest takeaway is not that the legal industry has a clear answer, but that waiting for certainty is no longer a serious option. Greg and Marlene come away from Texas Trailblazers with a sense that real progress is happening through testing, discussion, and repeated adjustment, not through perfect plans. Their recap captures an industry in transition, one where law firms, legal ops teams, vendors, and clients are all feeling the strain between old business models and new technical possibilities. The message is simple and urgent: start the conversations now, use the tools now, and get honest about what must change before the gap between what is possible and what is workable gets even wider.

Listen on mobile platforms: Apple Podcasts | Spotify | YouTube | Substack

[Special thanks to Legal Technology Hub for sponsoring this episode.]

Email: geekinreviewpodcast@gmail.com
Music: Jerry David DeCicca
Transcript:

Continue Reading Texas Trailblazers and the Hard Truth About AI in Legal Work

(See Day Two Coverage for the In-House Programs over on The Geek in Review Substack page – GL)

Day One of Texas Trailblazers in Dallas had a different tone than most legal tech conferences I attend. The conversations stayed close to the work. Less speculation, more discussion about what people are doing right now, where it is working, and where it is breaking.

Organized by Cosmonauts in partnership with LegalOps.com and held at The Statler Dallas, the Private Practice Day brought together law firm leaders, legal operations professionals, and technology vendors for a full day of keynotes, panels, and product demos. Joy Heath Rush, CEO of ILTA, chaired the day and opened with a fireside on the new era of conversations between general counsel and outside counsel. Her framing set the tone: AI is living at the intersection of business and technology, and it is creating conversations that simply did not exist two years ago.

Across the sessions and hallway conversations that followed, a few themes kept showing up. They were consistent whether the speaker came from a law firm, an in-house team, or a vendor building the tools.

Trust in AI is becoming a workflow problem

Rowan McNamee, Co-Founder and COO of Mary Technology, opened the sponsor keynote with a point that came up repeatedly throughout the day. AI outputs are persuasive. They read well. They look complete. That creates a tendency to move forward without enough friction in the process.

McNamee cited a recent study on what researchers call “cognitive surrender,” based on the work of Sean Hay and building on the framework in Thinking, Fast and Slow. Under time pressure, people rely on AI even when they know they should verify the result. In a series of experiments, override rates improved from 20% to 42% when participants had real money on the line. Even then, more than half still followed the AI’s answer. “You can reduce it, but you can’t eliminate it just by telling people to be careful,” McNamee noted.

In a legal setting, the implication is straightforward. Review cannot be optional. It has to be built into the process in a way that does not depend on someone remembering to slow down.

That concern echoed later in the day during the panel I moderated. Laura Ewing-Pearle, Senior Manager of eDiscovery and Practice Support Technology at Baker Botts, described a clear split even among e-discovery practitioners who are enthusiastic about AI. They lean heavily toward document interrogation and querying. They are far less comfortable with AI making responsiveness calls, and they are “definitely not comfortable with AI making privilege calls.” The line between assistance and reliance is still being worked out in real time.

Adding urgency to the conversation, panelists referenced a recent New York case in which a court ruled that AI-generated legal advice obtained through a public tool was not protected by attorney-client privilege. The ruling was specific to a consumer-facing chatbot, but the message landed clearly: enterprise AI tools with proper security and licensing are becoming a matter of professional risk management.

Adoption follows incentives, not access

The panel on Culture, Change, and Collaboration brought together Emma Dowden of Burges Salmon, Thom Wisinski of Haynes Boone, Kelley Lugo of Bryan Cave Leighton Paisner, Rowan McNamee, and Tammy Covert of Nachawati. The discussion stayed focused on what actually drives adoption inside organizations.

Continue Reading What I Took Away from Texas Trailblazers

This week we welcome Paula Reichenberg, founder of Neuron, for a sharp and thoughtful conversation about legal translation, artificial intelligence, and what happens when professional expertise collides with tools that look polished but still miss the mark. Paula shares her path from M&A and capital markets law into business school, legal services, machine learning, and finally legal tech entrepreneurship. What started as frustration with inefficiencies inside law firms grew into a translation business, then evolved again as machine translation improved and forced a harder question about survival, adaptation, and quality.

Paula explains how her early company, Hieronymus, found success by handling sensitive, high-stakes legal translations in Switzerland, especially where precision and confidentiality mattered most. But as machine translation improved, the market for average work started to disappear. Clients began doing more on their own, leaving only the hardest, highest-value assignments for specialists. Rather than ignore the shift, Paula leaned into it. That decision led her back to university, into data science and machine learning, and toward building Neuron, a company focused less on replacing expertise and more on improving the process around imperfect AI output.

A central theme of the discussion is the uncomfortable truth that many users do not care as much about excellence as professionals do. Paula makes the point with refreshing honesty. AI often produces work that is mediocre, but for a large share of users, mediocre is enough. That creates both a market shift and a professional dilemma. In legal translation, as in legal drafting more broadly, the issue is rarely whether AI produces something flawless. The issue is whether the user notices what is wrong, has the time to fix it, and has the systems in place to improve the result efficiently. Paula argues that the real value is not in claiming perfection. It is in helping experts find the mistakes faster, correct them with less pain, and avoid wasting hours doing work that feels like cleanup on aisle five.

The conversation also digs into trust, user behavior, and the strange authority people give to AI-generated answers. Paula recounts how, in one negotiation, a party trusted ChatGPT’s answer more than a human tax lawyer’s detailed explanation, even when the AI response was wrong. That anecdote opens up a broader discussion about confidence, presentation, and why polished outputs often feel more persuasive than expert judgment. Greg and Marlene connect that idea to legal systems, translation quality, and access to justice, especially where technology might offer better service than overworked and underfunded human systems. The result is not a simple pro-AI or anti-AI position. It is a grounded look at where human excellence still matters, where automation fills gaps, and where the future may split between mass-market convenience and premium, highly tailored expertise.

Looking ahead, Paula sees consolidation coming to legal tech, along with a growing push toward seamless interfaces that bring best-in-class features into one place. For Neuron, that means becoming an embedded layer inside other legal tools rather than forcing lawyers to juggle yet another standalone platform. Her crystal ball view is both stylish and sobering. She compares the future of legal services to retail and fashion, with more ready-to-wear solutions for everyday needs and a smaller, more exclusive market for bespoke legal work. It is a vivid way to frame what may be coming. The legal industry is not simply moving toward automation. It is sorting itself into tiers of service, quality, and expectation. And if Paula is right, the future belongs to those who understand where “good enough” ends and where true expertise still earns its premium.

Listen on mobile platforms: Apple Podcasts | Spotify | YouTube | Substack

[Special thanks to Legal Technology Hub for sponsoring this episode.]

Email: geekinreviewpodcast@gmail.com
Music: Jerry David DeCicca
Transcript

Continue Reading From Translation to Transformation: Paula Reichenberg on AI, Legal Quality, and the Future of Good Enough

This week, we sit down with two guests from Anthropic, Matt Samuels, Senior Product Counsel, and Den Delimarsky, a core maintainer of the Model Context Protocol, or MCP. Together, they unpack why MCP is drawing so much attention across the legal industry and why some are calling it the USB-C for AI. For law firms long burdened by disconnected systems, scattered data, and the infamous integration tax, MCP offers a shared framework for connecting models to the places where real work and real knowledge live, from iManage and Slack to email, data lakes, and internal tools.

Den explains that the promise of MCP is not tied to one model or one vendor. Instead, it creates a standardized way for AI tools to securely interact with many different systems without forcing organizations to build one-off integrations every time they want to connect a new source. The conversation gets especially relevant for legal listeners when Greg and Marlene press on issues like permissions, ethical walls, least-privilege access, and auditability. The answer from Anthropic is reassuring. MCP is built to work with familiar enterprise security concepts such as OAuth and role-based access, meaning firms do not have to throw out their security model in order to explore new AI workflows.
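MCP requests travel as JSON-RPC messages, and the permissions discussion above is really about what happens before a tool call is dispatched. The sketch below is a purely hypothetical illustration of that idea, not the actual MCP SDK or any firm's real configuration: a minimal `tools/call` handler that enforces role-based, least-privilege access (the "ethical walls" the hosts press on) before any connected system is touched. The tool names, roles, and registry shape are all invented for illustration.

```python
# Hypothetical sketch of an MCP-style "tools/call" handler with
# role-based access control. Tool names, roles, and the registry
# layout are illustrative assumptions, not the real MCP SDK.

# Each registered tool declares the minimum role allowed to invoke it.
TOOL_REGISTRY = {
    "search_dms":   {"min_role": "associate",
                     "fn": lambda args: f"DMS hits for {args['query']}"},
    "read_billing": {"min_role": "partner",
                     "fn": lambda args: f"billing summary for {args['matter']}"},
}

# Ordered roles: a higher index grants broader access, so callers get
# least privilege by default.
ROLE_ORDER = ["staff", "associate", "partner", "admin"]

def handle_tool_call(request: dict, caller_role: str) -> dict:
    """Dispatch a JSON-RPC style tools/call request, enforcing RBAC first."""
    name = request["params"]["name"]
    tool = TOOL_REGISTRY.get(name)
    if tool is None:
        return {"error": f"unknown tool: {name}"}
    # The ethical-wall check runs before any data source is touched.
    if ROLE_ORDER.index(caller_role) < ROLE_ORDER.index(tool["min_role"]):
        return {"error": f"{caller_role} may not call {name}"}
    return {"result": tool["fn"](request["params"]["arguments"])}

# An associate can search the document management system...
print(handle_tool_call(
    {"method": "tools/call",
     "params": {"name": "search_dms", "arguments": {"query": "indemnity"}}},
    caller_role="associate"))
# ...but billing data stays behind the partner wall.
print(handle_tool_call(
    {"method": "tools/call",
     "params": {"name": "read_billing", "arguments": {"matter": "12-345"}}},
    caller_role="associate"))
```

The point the guests make is that this gate sits in infrastructure the firm already understands (OAuth tokens carrying the caller's role), so exploring MCP does not mean rebuilding the security model.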

Matt brings the legal and operational lens, translating MCP into practical use cases for lawyers, legal ops teams, and security leaders. He describes how AI becomes far more useful once it has access to the systems lawyers already rely on every day, while still operating within carefully defined administrative controls. The discussion highlights a key shift in how firms should think about AI. This is no longer about asking a chatbot a clever question and getting a polished paragraph back. With MCP, firms are moving toward systems where AI can retrieve, correlate, summarize, draft, and support actions across multiple platforms, all while staying inside the guardrails set by the organization.

The episode also explores how MCP fits into the rise of agentic workflows, apps, plugins, and skills. Rather than treating AI as a static assistant, Anthropic describes a future where these tools become active participants in legal work, pulling together information from multiple sources, helping assemble case timelines, drafting notes into a shared document, and supporting lawyers in a far more integrated workspace. The conversation around skills is especially useful for firms thinking about standard operating procedures, preferred drafting styles, escalation rules, and repeatable work product. Skills and MCP do different jobs, but together they start to look like the operating system for structured legal workflows.

By the end of the conversation, one message comes through clearly. The legal profession is still early in this shift, but the pace is picking up fast. Both Matt and Den encourage listeners to stop treating these tools like abstract future concepts and start experimenting with them now. At the same time, they offer an important note of caution. As much as these systems promise speed and efficiency, lawyers still need to protect the craft of lawyering, their judgment, and the human choices that matter most. For firms trying to make sense of where AI is headed next, this episode offers a grounded and practical look at the infrastructure layer that could shape what comes next.

Listen on mobile platforms: Apple Podcasts | Spotify | YouTube | Substack

[Special thanks to Legal Technology Hub for sponsoring this episode.]

Email: geekinreviewpodcast@gmail.com
Music: Jerry David DeCicca
Transcript

Continue Reading Anthropic’s Matt Samuels and Den Delimarsky – Claude & MCP: Building the USB-C for the Legal Tech Stack

This week we welcome back Niki Black to unpack the findings from the newly released 2026 Legal Industry Report from 8am. The conversation centers on a legal profession moving into a new phase of AI adoption, where individual lawyers are embracing general-purpose AI tools at a striking pace while many firms still lack even basic policies or training. Niki explains that this disconnect is especially visible among solo, small, and mid-sized firms, where limited resources often slow formal governance even as day-to-day use rises fast.

A major theme of the discussion is the widening gap between personal experimentation and institutional readiness. Niki notes that lawyers are not waiting for permission, and many are already relying on AI to support research, drafting, and routine work. At the same time, firms are struggling to provide guidance, training, and guardrails. The episode highlights the growing risk of shadow AI in legal practice, especially when lawyers and staff turn to unsanctioned tools to keep pace with client demands. For smaller firms, the answer is not elaborate bureaucracy, but practical direction, clear expectations, and a recognition that even a modest policy is better than none.

The conversation also turns to client expectations and the economic pressure AI is placing on the traditional law firm model. Greg and Marlene press Niki on whether firms are truly ready to move away from the billable hour as AI compresses the time needed to complete legal work. Niki argues that large firms face deep structural obstacles because compensation systems, staffing models, and internal economics remain tied to hourly billing. Still, she sees pressure building from in-house counsel, boutique competitors, and smaller firms that use technology to deliver comparable work at lower cost. The result is a market that may resist change, but not escape it.

Another standout part of the episode explores how AI is reshaping access to justice. Niki points to the promise of generative AI as a force multiplier for legal aid lawyers and public defenders, especially when paired with trusted tools and better funding. She rejects the idea that technology alone will solve the justice gap, but makes a strong case that AI, combined with stronger institutional support, helps lawyers serve more people with better results. At the same time, the hosts and Niki acknowledge the risks of a two-tiered system, where wealthier clients benefit from high quality tools while vulnerable users face lower quality, error-prone outputs.

By the end of the episode, the conversation expands from AI tools to a broader structural shift across firms, clients, and law schools. Niki sees the next three to five years as a period of deep change, where pricing, training, competition, and professional expectations all evolve at once. She also shares her own methods for keeping up, including RSS feeds, trusted blogs, and LinkedIn, with a few playful complaints about Substack making life more complicated. The episode leaves listeners with a clear message: the biggest issue is no longer whether AI will affect legal practice. It already is. The real question is whether the profession can adapt fast enough to manage the consequences wisely.

Listen on mobile platforms: Apple Podcasts | Spotify | YouTube | Substack

[Special thanks to Legal Technology Hub for sponsoring this episode.]

Email: geekinreviewpodcast@gmail.com
Music: Jerry David DeCicca

Transcript:

Continue Reading Niki Black on AI Adoption, Billing Pressure, and the Governance Gap in Legal

Anastasia Boyko joins us this week for a wide-angle conversation about AI adoption, leadership, and the uncomfortable truth behind “we are watching what peer firms do.” A Yale-trained tax lawyer with experience spanning Axiom, legal education, and innovation leadership, Boyko argues that precedent-driven instincts are turning into a liability when the underlying rules of the market are shifting in real time.

The episode opens with lessons from the Women + AI 2.0 Summit at Vanderbilt and the “AI competence penalty” narrative. Boyko’s central principle for law firm leaders is simple: stop copying the competition and start operating with intention. Strategic planning matters more than tool shopping, especially when uncertainty makes leaders freeze, over-index on fear, or chase noise instead of outcomes.

From there, the conversation sharpens into client reality. Boyko shares what she is hearing from in-house leaders, and it is not comforting for firms. Legal departments are working to reduce dependence on outside counsel, business partners inside companies often accept “good enough,” and the models keep improving. The risk is not losing to a peer firm; it is losing the client relationship because the work stops feeling necessary.

A major theme is talent and the apprenticeship gap. Boyko argues firms underinvest in people, even as they spend aggressively on software stacks. AI can help junior lawyers with coaching and confidence, but it does not replace mentorship, judgment-building, or context. The skills that matter now include client advisory, operational thinking, critical judgment, and the ability to solve problems across a complex system, not only perform discrete tasks in a vacuum.

The episode closes on legal education and the future value of the JD. Boyko urges students to be selfish about learning AI, especially when faculty guidance comes from avoidance or philosophy rather than experimentation. Looking ahead, she predicts the JD’s value shifts upward, away from rote production and toward proactive advisory work, relationships, anticipatory counsel, and wisdom-driven judgment. In other words, fewer fire drills, more looking around corners.


Listen on mobile platforms: Apple Podcasts | Spotify | YouTube | Substack

[Special thanks to Legal Technology Hub for sponsoring this episode.]

Email: geekinreviewpodcast@gmail.com
Music: Jerry David DeCicca

Transcript:

Continue Reading Anastasia Boyko on Advisor Mode, Training Lawyers for the Post-Pyramid Firm

This week we go “talk show mode” for a special episode where Marlene recaps her trip to the Women + AI 2.0 Summit at Vanderbilt Law, hosted by Cat Moon, and shares why the event felt different from the standard conference grind: more energy, more structure, and, yes, a DJ.

The summit’s core focus sits right on a tension point in the wider AI conversation. There’s a persistent narrative that women use AI less than men. Cat Moon’s framing (“if it’s true, it’s a problem, and if it’s false, it’s also a problem”) sets the tone for a day built around participation and peer connection. The format uses “spark” cards (mini, midi, and maxi prompts) to push attendees into small conversations, deeper reflection, and a final takeaway.

Marlene also highlights sobering research shared during the opening, including an “AI competence penalty” dynamic where identical work is judged differently depending on whether evaluators believe a man or a woman used AI. The discussion lands on why these biases matter inside legal workplaces, and what leaders and peers can do to reduce the social cost of being open about AI usage.

Interspersed throughout are short interviews with attendees and speakers. Nicole Morris (Emory) captures the day’s purpose, expanding AI knowledge, talking risks, and connecting across roles. Sabra Tomb (University of Dayton School of Law) reframes AI as a leadership amplifier, moving from day-to-day management overload toward strategy and vision. Adele Shen (Vanderbilt) offers a funny but sharp taxonomy of AI “experts,” including “technocratic oracles,” “extinction alarmists,” and “touch grass humanists,” which sparks a candid side conversation about self-promotion, authority vibes, and who becomes “the story” in AI discourse.

The episode closes with a look at how education and training can work better. Marlene and Greg lean into peer show-and-tell sessions, leadership modeling, and safe spaces, both governance-safe and learning-safe. A two-person segment from Suffolk Law (Chanal Neves McClain and Dyane O’Leary) adds a teaching twist, integrating AI tools into skills instruction without isolating “AI week” from real lawyering judgment. The final note comes from Stephanie Everett (Lawyerist) on the power of stories, and the reminder that people do not need to internalize the narrative someone else hands them.

Listen on mobile platforms: Apple Podcasts | Spotify | YouTube | Substack

[Special thanks to Legal Technology Hub for sponsoring this episode.]

Email: geekinreviewpodcast@gmail.com
Music: Jerry David DeCicca

Transcript

Continue Reading Women + AI Summit, Real Talk: Leadership, Learning, and Not Letting “The Trap” Write Your Story

The billable hour has survived a lot of threats, from alternative fee arrangements to client procurement, but this episode makes the case that AI changes the pressure level. We open with a blunt assessment: time compresses, clients push back, and the old strategy of “work more to earn more” stops scaling. Enter Stefan Ciesla, co-founder and CEO of Ayora, who frames the moment as less a tech problem and more an operating model problem. Law firms still place P&L accountability on individual partners who carry deep legal specialization, then ask them to moonlight as revenue managers. Stefan argues firms are starting to replace that fragile setup with tools that support decision-making across pricing, budgeting, and matter management.

Stefan’s origin story is half high finance, half clinical decision science. He came out of investment banking and professional services transactions; his co-founder, Dr. Gordon McKenzie, came out of surgery and a PhD path tied to decision science and software. Together they pulled lessons from clinical triage and continuous improvement into the law firm context, focusing on how experts make better decisions under constraints. The hosts tease out the cultural weirdness at the center of the partnership model. Partners often take the long view for client relationships, yet short-term firm economics still take damage through write-offs, scope creep, and messy budgeting. Stefan’s pitch is reconciliation: align client-first instincts with firmer, data-backed pricing and project discipline.

A core anchor for the conversation is the often-quoted $36 billion annual “value gap,” described as preventable revenue leakage tied to write-offs, weak billing practices, bad data, and poor working capital hygiene. Stefan suggests the number matters less than the trend line. AI pushes a new kind of risk: mispricing innovation. If AI reduces billable hours, firms face a squeeze between steep rate increases and client resistance, then end up forced to express value in new ways. The show leans into a spicy idea: the push to change is no longer only client-driven. Stefan sees rising pressure from inside firms, often from the CFO and operations leaders trying to fund AI investment and protect cash flows in a higher-interest-rate environment. Greg sums it up with the line, “the call is coming from inside the house.”

Ayora’s product angle lands on two hard truths: pricing tools in legal have a rough track record, and law firm data quality has been a “25-year overnight problem.” Stefan explains why earlier tools struggled: low urgency when billable hours printed money, ugly underlying time and matter data, and products that were either too complex for occasional users or too simplistic for real-world exceptions. Ayora’s bet is that the data problem is solvable. Their system uses large language models plus proprietary approaches, including work-type ontologies, to extract signal from messy time narratives and matter metadata. The goal is consistent fields and usable categorization across tasks and phases, even when client taxonomies differ. Stefan claims field-level reconstruction and normalization at high accuracy, enough to power a chatbot-style interface that generates a pricing proposal in roughly 90 to 120 seconds by finding precedent matters and adapting them to the new scope.
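To make the “work-type ontology” idea concrete, here is a purely illustrative sketch: keyword rules stand in for the LLM, and the category names are invented. Nothing here reflects Ayora’s actual system; it only shows the shape of the task, mapping a free-text time narrative onto a consistent work-type field.

```python
# Illustrative only: map messy free-text time narratives onto a small,
# invented work-type ontology. A real system would use LLMs plus
# proprietary ontologies; keyword matching stands in for that here.

WORK_TYPE_ONTOLOGY = {
    "Research":  ["research", "case law", "precedent"],
    "Drafting":  ["draft", "revise", "redline"],
    "Discovery": ["deposition", "document review", "privilege log"],
}

def classify_narrative(narrative: str) -> str:
    """Return the first ontology category whose keywords appear,
    else 'Unclassified' (a real pipeline would flag these for review)."""
    text = narrative.lower()
    for category, keywords in WORK_TYPE_ONTOLOGY.items():
        if any(kw in text for kw in keywords):
            return category
    return "Unclassified"

# Typical messy billing entries, normalized into one consistent field.
entries = [
    "Revise and redline indemnification clause per client comments",
    "Research precedent on successor liability; summarize for partner",
    "Telephone conference with opposing counsel re scheduling",
]
for entry in entries:
    print(classify_narrative(entry), "|", entry)
```

The hard part, per the episode, is that real narratives and client taxonomies vary wildly, which is why this normalization step took “25 years” to become tractable.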

The conversation closes on the part everyone in a partnership feels in their bones: culture beats software. Moving away from the billable hour is not only a finance shift; it is an identity shift, habit shift, and trust shift. Stefan describes adoption as a joint change strategy, with peers inside the firm as allies and lots of direct conversations with lawyers to build trust in the recommendations. On the “generational gap” question, he leans toward curiosity over age. Some of their heaviest users have plenty of gray hair, and they tend to be the lawyers who care about how a practice runs. For his personal AI usage, Stefan gives an honest founder answer: meal planning for a two-year-old, automating company chores, and using AI as a sparring partner, with Notion as his favorite tool. His crystal ball point is one law firm leaders should underline twice: gross margin dynamics get messier as tech and LLM costs become part of the delivery mix, and the distance between inputs and outputs grows, driving both consolidation pressure and a new wave of innovation.

Listen on mobile platforms: Apple Podcasts | Spotify | YouTube | Substack

[Special thanks to Legal Technology Hub for sponsoring this episode.]

Email: geekinreviewpodcast@gmail.com
Music: Jerry David DeCicca

Transcript:

Continue Reading Revenue Leakage, Metadata, and the Post Billable Hour Playbook, Stefan Ciesla of Ayora

A fresh Anthropic announcement set off a week of market jitters and an existential question: what happens when the big model shops ship “legal productivity” features and the public markets flinch? This week, we bring Otto von Zastrow back for a rapid-response conversation, with a front-row view from New York and a blunt take: software grows cheaper to reproduce, so value migrates. The discussion lands on a key distinction, interface versus data, and on why the old guard still holds leverage even as new entrants sprint.

From there, the conversation zooms in on “systems of record” and the uneasy truth that the safest vault often loses mindshare when a new interface sits on top. Otto points to email, calendar, SharePoint, DMS platforms, and the growing power of a single chat workspace to become the place where work happens. The hosts press on a critical nuance for lawyers: legal research data is not flat, and “good law” demands hierarchy, treatment, and reliable citation context, not a pile of cases plus vibes.

Otto frames Midpage.ai as a data company first, built on continuous court ingestion plus normalization that used to demand armies of editors. He argues AI turns messy inputs into structured repositories at a scale that favors speed and breadth, yet accuracy still requires process design and verification loops. Greg sharpens the point for litigators: the bar is not clever answers, the bar is defensible citations, negative treatment, and confidence that the record matches reality. Otto agrees on the need for trust, then flips the lens: many annotation tasks look like grind work where modern models, paired with strong QA, start to outperform large manual pipelines.

The headline feature is integration via Model Context Protocol, described as a USB-C style connector for tools and models. Midpage chose distribution inside Claude and ChatGPT rather than forcing lawyers into yet another standalone site. Otto explains the wager: lawyers want fewer surfaces, and general chat platforms ship features at a pace no niche vendor matches alone, so the smart move is to meet users where daily work already lives. The demo story centers on research inside chat, with Midpage returning real case links and citations, then letting the user push deeper with uploads and follow-on tasks, while keeping verification one click away.

The back half turns to second-order effects: pricing, agent spend, and the rise of “vibe” work where professionals act more like managers of agent teams than sole authors of first drafts. Marlene raises governance and liability when internal DIY tools pop up outside formal review, and Otto predicts a pendulum toward professionalized deployment plus change management. The conversation closes on Midpage’s “holy grail” topic, citators and the case relationship graph, plus a clear-eyed forecast: standalone research websites shrink as a primary workspace, while research becomes groundwork performed by agents, with lawyers spending more time interrogating results than running searches.

Listen on mobile platforms: Apple Podcasts | Spotify | YouTube | Substack

[Special Thanks to Legal Technology Hub for sponsoring this episode.]

Email: geekinreviewpodcast@gmail.com
Music: Jerry David DeCicca

Transcript

Continue Reading Midpage Goes Native: Legal Research Inside Claude and ChatGPT, with Otto von Zastrow

Ray Brescia joins The Geek in Review this week to unpack a role with peak academia vibes, Associate Dean for Research and Intellectual Life at Albany Law School. Greg frames the title as “Chief Curator of Smart People Ideas,” and Ray embraces a “player-coach” approach, coaching faculty scholarship, unblocking stalled projects, and connecting peers across disciplines. The throughline is community, research momentum, and a practical view of how ideas move from draft to impact.

The conversation then pivots to the core thesis of Ray’s book, Lawyer 3.0. Ray maps the legal profession across three eras: Lawyer 1.0 as a low-barrier “amorphous bar,” Lawyer 2.0 as the institutional buildout of law schools, bar exams, ethics codes, and modern law firms, and Lawyer 3.0 as the next inflection point driven by technology. Ray ties prior shifts to urbanization, immigration, and industrial-scale commerce, then parallels those forces with today’s generative AI and analytics reshaping research, drafting, discovery, and service delivery.

Ray retells the famous milkshake study, then translates the idea into legal services: clients are not shopping for “a lawyer,” clients are shopping for problem resolution. This reframing pushes law firms to examine intake, scoping, and service design through the lens of client outcomes, business problems, and life problems, not internal practice labels. The milkshake becomes a metaphor for product-market fit in law, with fewer crumbs on the steering wheel.

Ray contrasts “bespoke services” with productized pathways, including a Model T style offering that meets most client needs at lower cost, plus higher-cost custom work when risk or complexity demands. Ray highlights expert-system style workflows such as Citizenshipworks, describing a TurboTax-like experience for straightforward matters, with “red flags” triggering referral to a lawyer. The same logic extends to limited scope representation and “lawyer for the day” programs in high-volume courts, where informed consent, reasonable scope, and “first, do no harm” reduce the chance of clients feeling abandoned midstream.
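The “red flags” pattern Ray describes can be reduced to a few lines of logic: a guided intake collects answers, and any flagged condition routes the matter to a lawyer instead of the self-serve path. This toy sketch is illustrative only; the field names and rules are hypothetical, not the actual logic of Citizenshipworks or any real product.

```python
# Hypothetical red-flag rules for an expert-system style intake.
RED_FLAGS = {
    "prior_removal_order": "Prior removal proceedings require attorney review.",
    "criminal_history": "Criminal history can affect eligibility; refer to a lawyer.",
}

def triage(answers: dict) -> tuple[str, list[str]]:
    """Return ('refer', reasons) if any red flag is set, else ('self_serve', [])."""
    reasons = [msg for key, msg in RED_FLAGS.items() if answers.get(key)]
    return ("refer", reasons) if reasons else ("self_serve", [])

print(triage({"prior_removal_order": True}))  # flagged matter routes to a lawyer
print(triage({"criminal_history": False}))    # straightforward matter stays self-serve
```

The design choice worth noting is that the system errs toward referral: "first, do no harm" means a single flag overrides the productized path, which is how limited-scope tools keep informed consent intact.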

The final stretch tackles law firm AI adoption, hallucination risk, and professional responsibility. Ray stresses minimum competence: verify cases, verify quotations, verify sources, and treat generative outputs as drafts or starting points, not final work product. The panel discusses guardrails, education, and workflow design for large firms, plus the rising reality of clients arriving with AI-generated “research.” Ray’s crystal ball points toward more commoditized legal services at scale, a latent market of underserved people, and stronger interdisciplinary collaboration between lawyers and technologists so legal education aligns with Lawyer 3.0 realities.

Listen on mobile platforms: Apple Podcasts | Spotify | YouTube | Substack

[Special Thanks to Legal Technology Hub for sponsoring this episode.]

Email: geekinreviewpodcast@gmail.com
Music: Jerry David DeCicca

Transcript

Continue Reading Lawyer 3.0 and the Milkshake Test: Ray Brescia on Legal AI, Client Value, and the Next Wave of Lawyering