This week, we are joined by Joshua Broyde, PhD and Principal Solutions Architect at AI21 Labs. Broyde discusses AI21 Labs’ work in developing foundation models and AI systems for enterprise use, with a focus on their latest model, Jamba-Instruct.

Josh explains the concept of foundation models and how they differ from traditional AI models. He highlights AI21 Labs’ work with financial institutions on use cases like term sheet generation and financial document Q&A. The conversation explores the challenges and benefits of training models on company-specific data versus using retrieval augmented generation (RAG) techniques.

The interview delves into the development of Jamba-Instruct, a hybrid model combining Mamba and Transformer architectures to achieve both speed and accuracy. Broyde discusses the model’s performance, industry reaction, and potential applications.

Safety and security considerations for AI models are addressed, with Broyde explaining AI21 Labs’ approach to implementing guardrails and secure deployment options for regulated industries. The discussion also covers the balance between model quality and cost, and the trend towards matching specific models to appropriate tasks.

Josh also shares his thoughts on future developments in the field, including the potential for agent-based approaches and increased focus on cost optimization in AI workflows.

Listen on mobile platforms: Apple Podcasts | Spotify | YouTube

Contact Us: 

Twitter: @gebauerm, or @glambert
Music: Jerry David DeCicca


Continue Reading Can AI Bring Both Speed and Accuracy: Josh Broyde of AI21 Labs (TGIR Ep. 261)

In January 2023, I started a project that compiles summaries of legal tech and innovation articles and creates a daily automated podcast. The AI Lawyer Talking Tech podcast has been a lot of fun and gets a few hundred listens each week. That’s not too bad for something I really created as my own personal professional development tool. I’ve been thinking over the past couple of weeks about what else I could be doing with the data I’ve collected. One idea I’m going to test out is to review some of the topical issues of the week and write about them here at 3 Geeks. The first thing I did was make the logo. After working with AI to create a picture, then telling it: “make me fatter,” “a little fatter,” “whoa, not that fat,” I started thinking about what kind of stories should be highlighted in a weekly review. I’m going to start with some of the big mergers/acquisitions, funding rounds, and new feature releases from legal technology vendors. We’ll see how that goes, and then adjust as needed.

  • Legaltech Hub Acquires Legal Tech Consultants
    Legaltech Hub has acquired Legal Tech Consultants, aiming to broaden its service offerings and enhance its market presence. This strategic move allows Legaltech Hub to integrate a wealth of expertise and innovative solutions from Legal Tech Consultants, providing a comprehensive suite of services to its clients. The acquisition marks a significant step in consolidating resources and strengthening its position in the legal technology sector.
  • Hebbia Confirms $130m Investment from Andreessen Horowitz and Google Ventures
    Hebbia, a fintech and legal tech startup, secured $130 million in Series B funding, led by Andreessen Horowitz and Google Ventures. Its flagship product, Matrix, enables users to compare and analyze complex documents simultaneously, leveraging multi-modal and multi-model capabilities. This investment underscores Hebbia’s significant growth and its potential to revolutionize document analysis for asset managers, law firms, and Fortune 100 companies.
  • Wordsmith Raises $5 Million to Empower Lawyers with AI
    AI-powered legal assistant platform Wordsmith has raised $5 million to further develop its technology, which aims to enhance lawyers’ capabilities by automating routine tasks. This funding will help Wordsmith integrate with existing legal systems and adhere to stringent security standards, allowing lawyers to focus on high-value work. The platform promises to speed up legal processes significantly, highlighting the growing importance of AI in professional legal services.
  • BRYTER Launches AI Agents with Major Brand Backing
    BRYTER has launched AI Agents, supported by major brands, aiming to enhance legal workflows with advanced AI capabilities. This new product offers innovative solutions for automating complex legal processes, reducing the time and effort required for various tasks. The launch demonstrates BRYTER’s commitment to advancing legal technology and providing cutting-edge tools to law firms and legal departments.
  • Leya and FromCounsel Launch Knowledge Sharing Pilot with 12 Law Firms
    Leya and FromCounsel have partnered to launch a knowledge-sharing pilot program involving 12 law firms. This initiative focuses on leveraging generative AI to enhance legal knowledge management and collaboration. By integrating AI-driven tools, the program aims to streamline knowledge sharing and improve the efficiency and effectiveness of legal research and document drafting within participating firms.
  • Introducing the Paxton AI Citator: Setting New Benchmarks in Legal Research
    Paxton AI has introduced “Citator,” a patent-pending tool that uses AI to check the status and significance of case law. Unlike traditional citators, this AI-powered system analyzes related cases and semantic similarities, achieving a 94% accuracy rate in Stanford CaseHOLD benchmark testing. The tool aims to give legal professionals a more reliable and predictive research resource, marking a significant advancement in legal research technology.
  • Litify Announces Responsible AI Development Approach with AWS and Anthropic
    Litify has announced its collaboration with AWS and Anthropic to develop a responsible AI approach for legal technology. This partnership focuses on integrating ethical AI practices into Litify’s platform, ensuring compliance with legal standards and enhancing the functionality of AI in legal operations. This initiative highlights the growing importance of responsible AI development in the legal tech industry.
  • Midpage: The New GenAI Legal Research Startup
    Midpage has launched as a new generative AI legal research startup, aiming to revolutionize how legal research is conducted. By utilizing advanced AI algorithms, Midpage provides more efficient and accurate legal research capabilities. This startup is positioned to significantly impact the legal tech landscape by offering innovative solutions for legal professionals seeking to enhance their research processes.
  • Spellbook Launches Contract ‘Benchmarks’ to Show What’s Market
    Spellbook has launched a new feature called “Contract Benchmarks,” which uses AI to provide market-standard comparisons for legal contracts. This tool helps legal professionals understand prevailing market terms and conditions, improving contract negotiation and drafting. Spellbook’s latest offering represents a significant advancement in contract management technology, leveraging AI to enhance legal practice.
  • Osprey Approach Partners with UX Specialists Full Clarity
    Osprey Approach, a legal software provider, has partnered with UX specialists Full Clarity to enhance the user experience of its platform. This collaboration aims to make legal work more intuitive and user-friendly by incorporating client feedback and conducting UX research workshops. The partnership reflects Osprey Approach’s commitment to continuous improvement and innovation in legal software.

In this episode of The Geek in Review, hosts Marlene Gebauer and Greg Lambert sit down for a one-on-one conversation to catch up on their recent vacations and discuss some of the latest developments in the legal industry. Marlene shares her experience in Hawaii, where she enjoyed beautiful beaches, a nature preserve, and delicious local cuisine with her family. Greg, on the other hand, talks about his trip to South Africa, where he spent time in Kruger National Park observing wildlife and learning about the challenges of rhino poaching.

The conversation then shifts to the recent lawsuits filed by The New York Times, the Center for Investigative Reporting, and Mother Jones against OpenAI and Microsoft for using their copyrighted material to train AI systems. The hosts discuss the implications of these lawsuits and draw parallels to the music industry’s past struggles with Napster and the eventual rise of streaming services.

Marlene introduces a new AI-powered comic maker she discovered, which allows users to generate comic strips based on their own images and descriptions. Despite some humorous mishaps with her own generated character, she sees potential in the tool for creating engaging content. Greg shares his experience with Hedra, an AI tool that animates still pictures to create talking head videos, and the two discuss the possibility of creating a fully AI-generated podcast episode.

The hosts also explore practical applications of AI, such as AI Excel Bot, which generates Excel formulas based on plain text instructions and explains existing formulas in simple terms. They discuss how this tool could be beneficial for professionals who frequently work with complex spreadsheets.

Lastly, Greg highlights an episode of the Technically Legal podcast featuring Brandon Epstein, Chief Forensic Officer at Medex, who discusses the challenges of detecting deep fakes and the digital fingerprints left by various recording devices. The conversation emphasizes the importance of authenticating videos, especially in the news media, and the ongoing battle between deep fake creators and forensic experts.



Continue Reading Catching Up on Tech and Travels – TGIR Ep. 260

[Note: Please welcome Laurent Wiesel, Principal at Justly Consulting. This article was originally published in LinkedIn. – GL]

Harvey AI bills its platform as providing a suite of products tailored to lawyers and law firms across all practice areas and workflows.

Harvey’s debut product video, released on June 28 and clocking in at 1:44, plays to a background of Mozart’s Piano Sonata No. 11 and no further audio. While light on explanation, the video introduces several features along with some interesting bells and whistles.

Harvey’s features, presented here in the order they appear in the left nav:

1. Client-Matter Number Integration

  • Critical feature for law firm operations
  • Helps enforce client policies and avoid data blending
  • Similar to established platforms like Lexis and Westlaw
  • Potential benefits for billing and ethical wall enforcement
  • Modal alert indicates ability to attach client-matter numbers to individual queries

2. Assistant

  • Offers chat and document Q&A capabilities
  • Currently provides single responses, no interactive chat yet
  • Prompt limit: 100,000 characters, which appears to reduce to 4,000 when even a single document is added
  • Feature to save and load prompts, separated into Private, Team (collaboration), and Harvey (pre-built) varieties
  • Unexplained “Save Example” feature (briefly visible at the 0:18 mark)
  • Citations in assistant and research responses for quick verification

Continue Reading Let’s Breakdown Harvey.AI’s Video of Features (Guest Post)

Since Greg Lambert is on vacation, we wanted to share an episode of the Future Ready Business podcast, which Greg also produces. Art Cavazos and Courtney White of Jackson Walker, LLP, interview Neil Chilson, Head of AI Policy at the Abundance Institute, and Travis Wussow, a regulatory and governmental affairs partner at JW. Neil and Travis previously worked together at the Charles Koch Institute and are both heavily involved in advising government agencies and policymakers on the topic of AI.

Neil Chilson and Travis Wussow both emphasize the complexity of regulating AI due to its broad applications and the difficulty in defining it. They argue that most AI applications fall into areas that already have existing regulatory frameworks, such as healthcare, intellectual property, and transportation. Chilson suggests that policymakers should focus on identifying specific harms and addressing gaps in current regulations rather than creating entirely new frameworks for AI.

Regarding current AI policy, Wussow notes that litigation is already underway, particularly in areas like copyright infringement. He believes that proactive policymaking will likely wait until these legal disputes are resolved. Chilson highlights that there is significant activity at the federal level, with the White House issuing a comprehensive executive order on AI, and at the state level, with numerous AI-related bills being proposed.

On the topic of AI’s potential impact on elections and misinformation, Chilson expresses less concern about AI-generated content itself and more about the distribution networks that spread misinformation. He emphasizes the importance of maintaining trust in the electoral system and suggests that tracking and analyzing actual instances of AI use in elections is crucial for understanding its real impact.

Looking to the future, both experts stress the importance of the United States maintaining its leadership in AI development. They argue that this leadership is essential for embedding American values into AI systems and preventing other countries, such as China, from dominating the field with potentially restrictive approaches. Chilson also highlights the potential for AI to revolutionize healthcare, emphasizing the need to adapt regulatory frameworks, particularly in areas like FDA approval processes, to allow for the benefits of AI-driven personalized medicine while ensuring safety and efficacy.



Continue Reading The Current and Future State of AI Policies with Neil Chilson and Travis Wussow

This week, we have a lively discussion with June Liebert and Cornell Winston, President and President-Elect, respectively, for the American Association of Law Libraries (AALL). The conversation centers around the upcoming AALL annual conference, scheduled for July 20-23, 2024, at the Hyatt Regency in Chicago. 

June Liebert, Director of Information Services at O’Melveny & Myers LLP, kicks off the discussion by diving into the conference theme. She emphasizes the importance of librarians taking proactive leadership roles, particularly in the context of the rapidly evolving landscape influenced by Generative AI. June highlights the concept of “innovation intermediaries,” individuals who not only generate innovative ideas but also ensure these ideas are implemented effectively. This theme resonates with the need for transformative thinking, urging librarians to embrace significant changes rather than settling for incremental improvements.

This year’s keynote speaker is Cory Doctorow, a renowned sci-fi author and advocate for digital rights. Doctorow’s presence promises to bring a unique perspective on the intersection of technology and societal impact. June shares her enthusiasm for Doctorow, whose work with the Electronic Frontier Foundation (EFF) and writings on “enshittification” – the degradation of online platforms over time – provide critical insights into the ethical implications of technological advancements. His focus on the human impact of technology, rather than just the technology itself, offers valuable reflections for the legal information profession.

Cornell Winston, law librarian at the United States Attorney’s Office, provides a comprehensive overview of what attendees can expect from the conference. With over 60 educational programs, including a pre-conference workshop on AI strategy, the event promises rich learning opportunities. Cornell underscores the value of networking and connecting with peers, highlighting the inclusive environment fostered by the Host Program for first-time attendees. His advice to explore sessions outside one’s usual domain and to meet new people each day encapsulates the spirit of professional growth and community building.

As the conversation unfolds, the trio touches on the broader theme of innovation and technology within law libraries. June and Cornell discuss the shift from physical books to digital resources, reflecting on how generative AI and other technologies are reshaping the profession. June mentions the implementation of live closed captioning for sessions, a first for the conference, enhancing accessibility and providing real-time transcripts for attendees.

June shares her experiences as the first Asian American president of the association, highlighting her efforts to promote diversity, equity, and inclusion. Cornell, looking ahead to his presidency, discusses plans to review AALL’s governance structure and explore the future of law libraries in an increasingly digital world. The episode wraps up with a preview of the 2025 conference in Portland, Oregon, promising another enriching experience for the legal information community.



Continue Reading Leading, Innovating, and Transforming: Insights for the 2024 AALL Annual Conference – June Liebert and Cornell Winston

If you’re my age or older, you most likely remember the first rudimentary spreadsheet application you used.  You may not remember the actual moment you discovered the sort functionality, but you probably remember the feeling you got when you hit that button for the first time. In one magical moment, a jumbled list of items – randomly entered in the nonsensical order they popped into your brain – was immediately transformed into a perfectly alphabetized list worthy of your municipal library’s card catalog.  Amazing!  Revolutionary! Life changing?!

Today we live in the world of Generative AI.  It is amazing, revolutionary, and probably life changing. This technology has already changed, and will continue to change, how we use and interact with computers in business, and in our personal lives.  I use it daily.  I use it in solutions for my clients.  It takes tedious tasks of text transformation and turns them into simple push button experiences.  Just like the sort function does.

Of course, GenAI does a lot more than order or sort text. It translates. It rewrites. It summarizes. It clarifies. It extracts. It interpolates. It expounds upon. It combines THIS and THAT into one thing. It re-imagines THIS, as if it were actually THAT. It recontextualizes THIS as if THAT didn’t exist. Its capabilities go well beyond simply sorting a list. In fact, it often gets simple sorting wrong, because it doesn’t use a hard-coded algorithm to produce its text transformations. Instead, GenAI uses vast troves of written example language to determine the probability of the next word it should write, and then the next, and the next… until it reaches its maximum output length or runs out of space.
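The next-word mechanics described above can be sketched with a toy bigram model. To be clear, this is a deliberately tiny illustration, not how any real LLM is built: the vocabulary, probabilities, and function names here are all made up, and real models work over subword tokens with billions of learned parameters rather than a hand-written table.

```python
import random

# Toy "language model": for each context word, a made-up probability
# distribution over possible next words. A real LLM learns distributions
# like these from vast troves of example text, at enormous scale.
NEXT_WORD_PROBS = {
    "the": {"court": 0.5, "contract": 0.3, "end": 0.2},
    "court": {"held": 0.6, "found": 0.4},
    "contract": {"states": 0.7, "ends": 0.3},
}

def generate(start: str, max_words: int = 5, seed: int = 0) -> str:
    """Repeatedly sample the next word by probability until no
    continuation is known or the maximum output length is reached."""
    random.seed(seed)  # fixed seed so the toy example is repeatable
    words = [start]
    for _ in range(max_words):
        dist = NEXT_WORD_PROBS.get(words[-1])
        if dist is None:  # model "runs out" of continuations: stop
            break
        choices, weights = zip(*dist.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))
```

Each call just rolls weighted dice, word after word, which is why the output can read fluently while having no guarantee of being correct; that same property is what makes hard-coded tasks like sorting unreliable for these systems.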

The results of this seemingly simple exercise are impressive. Any writer with a rough idea of what they want to write no longer needs to stare at a blank page (or screen); they simply ask a question or propose a concept and the technology generates a draft. Any reader who doesn’t want to read a 400-page transcript of a court proceeding can get a summary that guides them directly to the “important” parts of the text. And any person who needs to transform text from one form, or language, or perspective to another can get a draft version of that transformation quickly and easily.


However, there is a downside to Generative AI.  It so easily manipulates and transforms text, that it often gives the appearance of being intelligent.  This is an illusion.  We tend to identify people who easily manipulate and transform text as intelligent people, which gives rise to the fallacy that a machine that does the same is an intelligent machine. We talk about GenAI “passing the bar exam”, as if that means it understands the law, and then we further extrapolate that we can use GenAI to replace lawyers.  As tempting as that proposition might be to those of us who are not lawyers but work with them regularly, it’s not going to happen any time soon, if ever.

Continue Reading GenAI & the Magical Sort Button

In this episode of The Geek in Review podcast, hosts Greg Lambert and Kate Boyd from Sente Advisors (standing in for Marlene Gebauer) sit down with Giles Thompson, Head of Growth, and Jun Choi, Growth Executive at Avvoka, to discuss the company’s innovative approach to document automation and the impact of generative AI on the legal industry.

Avvoka is a no-code document automation platform that enables legal professionals to streamline the creation and management of complex legal documents and contracts. The company has recently introduced AI-enhanced features such as SmartAutomation (with GenAI) and SmartConsolidation, which aim to simplify the process of building automations.

Giles and Jun highlight the differences in knowledge management practices between the US and UK, with the former being more technology-focused and the latter more human-centric. Avvoka’s platform caters to both law firms and in-house legal teams, with clients including Warner Brothers Discovery and McDonald’s.

The company also hosts a vendor-agnostic community event series called “Logically Drafted,” which brings together legal professionals interested in document automation to share their experiences and insights. These events have gained traction globally, with upcoming sessions planned for Houston (Tuesday 18 June) and Chicago (Friday 21 June), among other cities.

Looking ahead, Avvoka is focusing on integrating generative AI technologies into its platform while ensuring data security and client control. The company is collaborating with clients to provide flexibility in terms of hosting and integrating large language models, allowing them to maintain control over their data and manage risks associated with these emerging technologies.

Giles and Jun emphasize the importance of being realistic about the capabilities and limitations of generative AI in the legal industry. They believe that document automation will continue to play a crucial role, with AI serving as an enhancement rather than a replacement for existing tools and processes. The key challenge for vendors like Avvoka will be to navigate the hype surrounding generative AI while delivering practical, value-driven solutions to their clients.



Continue Reading Avvoka’s Innovative Approach to Document Automation and the Impact of Generative AI – Giles Thompson and Jun Choi

Reflections on the Stanford HAI Report on Legal AI Research Tools

The updated paper from the Stanford HAI team presents rigorous and insightful research into the performance of generative AI tools for legal research. The complexity of legal research is well-acknowledged, and this study looks into the capabilities and limitations of AI-driven legal research tools like Lexis+ AI and Westlaw AI-Assisted Research.

Stanford and Unclean Hands

This is the Stanford Human-Centered Artificial Intelligence (HAI) team’s second bite at this apple. The first version of the report was criticized by me and others for testing Thomson Reuters’ Practical Law AI tool as a legal research tool instead of Westlaw Precision AI (which they call Westlaw AI-AR in their report). In the follow-up report, and in the accompanying blog and LinkedIn posts, the authors made it sound as though Thomson Reuters had some nefarious reason for this. In reality, TR had not released the AI tool to any academic institution, not just the Stanford group.

In the original report, this should have been stated up front and center; instead it was finally mentioned on page 9. It appears to me that Stanford HAI released the initial report in an attempt to pressure TR into granting access, which did work. However, this type of maneuver may affect whether the legal industry gives Stanford HAI a second chance and takes this report seriously. We expect a high degree of ethics from prestigious institutions like Stanford, and in my opinion, this team of researchers did not meet those expectations in the initial report.

Challenges in Legal Research

Legal research is inherently challenging due to the intricate and nuanced nature of legal language and the vast array of case law, statutes, and regulations that must be navigated. The Stanford researchers designed their study with prompts that often included false statements or were worded in ways that tested the AI tools’ ability to understand legal complexities. This approach, while insightful, introduces a certain bias aimed at identifying the tools’ vulnerabilities. I would not say that they cherry-picked the results for the paper, but that the questions posed were designed to challenge the systems they were testing. This is not a bad thing, just something to consider when reading the report.

AI Tools: Strengths and Weaknesses

One of the primary advantages, and also disadvantages, of generative AI legal research tools is their attempt to provide direct answers to user prompts. Traditional legal research tools like Westlaw and Lexis previously operated by presenting users with a list of relevant cases, statutes, and regulations based on Boolean or natural language searches. The shift to AI-generated responses aims to streamline this process but comes with its own set of challenges.

For instance, Westlaw’s Precision AI (Westlaw AI-AR) often produces lengthy answers, which can be a double-edged sword. While these detailed responses can be informative, they also increase the risk of including inaccuracies or “hallucinations.” The study highlights that these AI tools sometimes produce errors, raising questions about the effectiveness of their initial vector search compared to traditional Boolean and natural language searches. The researchers gave each result a Correct/Incorrect/Refusal score, and to count as correct, the AI-generated answer had to be 100% accurate. Any inaccurate information automatically resulted in an incorrect score: if a response had five points of reasoning and only one was factually incorrect or hallucinated, the entire question was ruled incorrect. That is something else to keep in mind when looking at the statistics.

Comparing AI and Traditional Searches

A key point of interest is how AI search results for statutes, regulations, and cases compare with traditional search methods. The study suggests that while the AI tools’ vector search technology is promising in retrieving relevant documents, the accuracy of the AI-generated responses is still a concern. Users must still verify the AI’s answers against the actual documents, much like traditional legal research. As my friend Tony Thai from Hyperdraft would jokingly say, “I mean we could just do the regular research and I don’t know…READ like we are paid to do.”

The Role of Vendors and the Future of Legal AI

Vendors like Westlaw and Lexis need to take this report seriously. The previous version of the study had its limitations, but the updated report addresses many of these deficiencies, providing a more comprehensive evaluation of the AI tools. While vendors may point out that the study is based on earlier versions of their products, the fundamental issues highlighted remain relevant.

The legal community has not abandoned these AI tools despite their flaws, primarily due to the decades of goodwill these vendors have built. The industry understands that this technology is still in its infancy. With the exception of CoCounsel, which was not directly evaluated in this study, these products have not reached their first birthday yet. This is very new technology that vendors and customers want released quickly, with issues fixed along the way. Vendors have the opportunity to engage in honest conversations with their customers about how they plan to improve these tools and reduce hallucinations. Vendors need to respect their customers and be forthcoming about the issues they are working to correct. Creative advertising may create some short-term wins but may damage long-term relationships. Vendors are bending themselves into pretzels to advertise some version of “hallucination-free” on websites and marketing media when everyone knows that hallucination in Generative AI tools is a feature, not a bug.

RAG as a Stop-Gap Measure

The study underscores that Retrieval-Augmented Generation (RAG) is a temporary solution, something I have discussed with vendors for a while now. For generative AI tools to truly excel, users need more sophisticated ways to manipulate their searches, refine retrieval processes, and interact with the AI. Currently, the system is basic and limited, but there is optimism that it will improve with time and consumer pressure. RAG is one of the first steps toward improving AI-generated results for legal research, not the last.

Personal Reflections and Industry Implications

Reflecting on the broader implications, it is clear that the AI tools in their current form are not yet replacements for thorough, human-conducted legal research. The tools offer significant promise but must evolve to become more reliable and versatile. Legal professionals must remain vigilant in verifying AI-generated outputs and continue to advocate for advancements that will make these tools more robust and dependable. We are right in the middle of the Trough of Disillusionment on the Gartner Hype Cycle. This means that customers are expecting solid results, and while there is this unusual honeymoon with the legal industry when it comes to forgiving AI tools for hallucinations, this honeymoon will not last much longer. Lawyers are notorious for trying a product once, and if it works, they stick with it, and if it doesn’t, it may be months or years before they are willing to try it again. We are almost at that point in the industry with Generative AI tools that don’t reach their advertised abilities.

While the Stanford HAI report sheds light on the current state of legal AI research tools, it also provides a roadmap for future improvements. The legal community must work collaboratively with vendors to refine these Generative AI technologies, ensuring they become valuable assets in the practice of law.

In this episode of The Geek in Review, hosts Greg Lambert and Marlene Gebauer, along with special co-host Toby Brown of DV8 Legal Strategies, discuss the subscription-based legal services model with Mathew Kerbis, founder of Subscription Attorney, LLC, and Jack Shelton, co-founder of Aegis Space Law. The guests share their experiences and insights on adopting a subscription-based model as an alternative to the traditional billable hour.

Mathew Kerbis explains that his inspiration for transitioning to a subscription and flat fee model stemmed from the realization that many attorneys, including himself, could not afford their own billable rates. He emphasizes the importance of automating processes and leveraging technology to manage workflow and client needs efficiently, especially when offering lower-priced services.

Jack Shelton, whose firm focuses on the commercial space industry, shares his motivation for adopting a subscription model. He highlights the benefits of predictability and cost transparency for clients, particularly startups and small to medium-sized businesses. Shelton also discusses the development of in-house software to streamline complex analyses and improve accuracy and record-keeping for clients.

The guests explore the challenges and opportunities of scaling and sustaining a subscription-based model. Kerbis notes that the model is highly sustainable due to the predictability of monthly recurring revenue, while Shelton emphasizes the importance of refining processes and forms to ensure efficiency and effectiveness.

Looking to the future, Kerbis predicts that the rapid advancement of AI and the changing nature of legal practice will lead to a significant shift in the industry within the next five years. He suggests that if big law firms do not adapt, there may be a mass exodus of young associates seeking better work-life balance and the opportunity to start their own practices. Shelton, while agreeing with Kerbis to an extent, remains cautious about predicting a flood of lawyers leaving big law due to their risk-averse nature.



Continue Reading Exploring the Subscription-Based Legal Services Model with Mathew Kerbis and Jack Shelton