Tony Thai and Ashley Carlisle of HyperDraft return to The Geek in Review podcast to provide an update on the state of generative AI in the legal industry. It has been six months since their last appearance, when the AI hype cycle was on the rise, and we wanted to bring them back on the show to see where we are on that curve today.

While hype around tools like ChatGPT has started to level off, Tony and Ashley note there is still a lot of misinformation and unrealistic expectation about what this technology can currently achieve. Over the past few months, HyperDraft has received an influx of requests from law firms and legal departments for education and consulting on how to practically apply AI like large language models. Many organizations feel pressure from management to “do something” with AI but lack a clear understanding of the concrete problems they aim to solve. The result is a solution in search of a problem.

Tony and Ashley share several key lessons about the limitations of generative AI. It is not a magic bullet: you still have to put in the work to standardize processes before automating them. The technology excels at research, data extraction, and summarization, but struggles to create final, high-quality legal work product. And if the goal is standardizing a process or topic, a tool that can generate 50 different answers to the same question doesn’t create standards, it creates chaos.

Current useful applications center on legal research, brainstorming, administrative tasks – not mission-critical legal analysis. The hype around generative AI could dampen innovation in process automation using robotic process automation and expert systems. Casetext’s acquisition by Thomson Reuters illustrates the present-day limitations of large language models trained primarily on case law.

Looking to the near future, Tony and Ashley predict the AI hype cycle will continue to fizzle out as focus shifts to education and literacy around all forms of AI. More legal tech products will likely combine specialized AI tools with large language models. And law firms may finally move towards flat rate billing models in order to meet client expectations around efficiency gains from AI.

Listen on mobile platforms: Apple Podcasts | Spotify

Contact Us:

Twitter: @gebauerm or @glambert
Voicemail: 713-487-7821
Email: geekinreviewpodcast@gmail.com
Music: Jerry David DeCicca

Transcript

Continue Reading You Still Need to Put in the Work: HyperDraft’s Ashley Carlisle and Tony Thai on the AI Hype Cycle (TGIR Ep. 213)

The Geek in Review podcast welcomes Kriti Sharma, Chief Product Officer of Legal Tech at Thomson Reuters, to discuss AI and ethics in the legal industry. Kriti talks to us about the importance of diversity at Thomson Reuters and how it impacts product development. She explains TR’s approach to developing AI focused on augmenting human skills rather than full automation. Kriti also discusses the need for more regulation around AI and the shift toward human skills as AI takes on more technical work.

A major theme is the responsible development and adoption of AI tools like ChatGPT. Kriti discusses the risks of bias and shares TR’s commitment to building trusted, ethical AI grounded in proven legal content. Through this “grounding” of the information, the AI produces reliable answers lawyers can confidently use, reducing the hallucinations that are prevalent in publicly available commercial generative AI tools.

Kriti shares her passion for ensuring people from diverse backgrounds help advance AI in law. She argues representation is critical in who develops the tech and what data trains it in order to reduce bias. Kriti explains that diversity of experiences and knowledge among AI creators is key to building inclusive products that serve everyone’s needs. She emphasizes Thomson Reuters’ diversity across leadership, which informs the development of thoughtful AI. Kriti states that because AI learns from its creators and data the way humans do, we must be intentional about diverse participation. Having broad involvement in shaping AI will lead to technology that is ethical and avoids propagating systemic biases. Kriti makes a compelling case that inclusive AI creation is imperative both for building trust and for realizing the technology’s full potential to help underserved communities.

Kriti Sharma highlights the potential for AI to help solve major societal challenges through her non-profit AI for Good, for example by democratizing access to helpful legal and mental health information. She speaks about how big companies like TR can turn this potential into actual services benefiting underserved groups, and advocates for collaboration between industry, government, and civil society to develop beneficial applications of AI.

Kriti founded the non-profit AI for Good to harness the power of artificial intelligence to help solve pressing societal challenges. Through AI for Good, Kriti has led the development of AI applications focused on expanding access to justice, mental healthcare, and support services for vulnerable groups. For example, the organization created the chatbot tool rAInbow to provide information and resources to those experiencing domestic violence. By partnering frontline organizations with technologists, AI for Good aims to democratize access to helpful services and trusted information. Kriti sees huge potential for carefully constructed AI to have real positive impact in areas like legal services for underserved communities.

Looking ahead, Kriti says coordinated AI regulations are needed globally. She calls for policymakers, companies and society to work together to establish frameworks that enable adoption while addressing risks. With the right balance, AI can transform legal services for the better.



Continue Reading Thomson Reuters’ Kriti Sharma on Responsible AI: The Path to Trusted Tech in Law

In this episode of The Geek in Review podcast, host Marlene Gebauer and co-host Greg Lambert discuss cybersecurity challenges with guests Jordan Ellington, founder of SessionGuardian, Oren Leib, Vice President of Growth and Partnership at SessionGuardian, and Trisha Sircar, partner and chief privacy officer at Katten Muchin Rosenman LLP.

Ellington explains that the impetus for creating SessionGuardian came from working with a law firm to secure their work with eDiscovery vendors and contract attorney staffing agencies. The goal was to standardize security practices across vendors. Ellington realized the technology could provide secure access to sensitive information from anywhere. SessionGuardian uses facial recognition to verify a user’s identity remotely.

Leib discusses some alarming cybersecurity statistics, including a 7% weekly increase in global cyber attacks and the fact that law firms and insurance companies face over 1,200 attacks per week on average. Leib notes SessionGuardian’s solution addresses risks beyond eDiscovery and source code review, including data breach response, M&A due diligence, and outsourced call centers. Recently, a major North American bank told Leib that 10 of their last breach incidents were caused by unauthorized photography of sensitive data.

Sircar says law firms’ top challenges are employee issues, data retention problems, physical security risks, and insider threats. Regulations address real-world issues but can be difficult for global firms to navigate. Certifications show a firm’s commitment to security but continuous monitoring and updating of practices is key. When negotiating with vendors, Sircar recommends considering cyber liability insurance, audit rights, data breach responsibility, and limitations of liability.

Looking ahead, Sircar sees employee education as an ongoing priority, along with the ethical use of AI. Ellington expects AI will be used for increasingly sophisticated phishing and impersonation attacks, requiring better verification of individuals’ identities. Leib says attorneys must take responsibility for cyber defenses, not just rely on engineers. He announces SessionGuardian will offer free CLE courses on cybersecurity awareness and compliance.

The episode highlights how employee errors and AI threats are intensifying even as remote and hybrid work become standard. Firms should look beyond check-the-box compliance to make privacy and security central in their culture. Technology like facial recognition and continuous monitoring helps address risks, but people of all roles must develop competence and vigilance. Overall, keeping client data secure requires an integrated and ever-evolving approach across departments and service providers. Strong terms in vendor agreements and verifying partners’ practices are also key.



Continue Reading Cybersecurity in the Remote Work Era: AI, Employees and an Integrated Defense – With SessionGuardian’s Jordan Ellington and Oren Leib, and Katten’s Trisha Sircar (TGIR Ep. 211)

For the Fourth of July week, we thought we’d do something fun and probably a little weird. Greg spoke with an AI guest named Justis for this episode. Justis, powered by OpenAI’s GPT-4, was able to have a natural conversation with Greg and provide insightful perspectives on the use of generative AI in the legal industry, specifically in law firms.

In the first part of their discussion, Justis gave an overview of the legal industry’s interest in and uncertainty around adopting generative AI. While many law firm leaders recognize its potential, some are unsure of how it fits into legal work or worry about risks. Justis pointed to examples of firms exploring AI and said letting lawyers experiment with the tools could help identify use cases.

Greg and Justis then discussed the challenges for the legal industry in using AI, like knowledge gaps, data issues, technology maturity, and managing change. They also talked about the upsides of using AI for tasks such as research, drafting, and review, including efficiency and cost benefits, as well as downsides like over-reliance on AI and ethical concerns.

The conversation turned to how AI could streamline law firm operations, with opportunities around scheduling, paperwork, billing, client insights, and more. However, Justis noted that human oversight is still critical. Justis and Greg also discussed how AI may impact legal jobs, creating demand for new skills and roles but aiming to augment human work rather than replace it.

Finally, Justis suggested innovations law firms could build with AI like research and drafting tools, analytics, dispute resolution systems, and project management. Justis emphasized that focusing on user needs, ethics, and change management will be key for successfully implementing AI. Looking ahead, Justis anticipated continuing progress in legal AI, regulatory changes, a focus on ethics, growing demand for AI skills, and AI becoming a competitive advantage for some firms.

While this was a “unique” episode for The Geek in Review, we hope it provided an insightful “conversation” about the current and future state of generative AI in the legal industry. There is significant promise but there are also challenges around managing change, addressing risks, and ensuring the responsible development of new AI tools. With the right focus and approach, law firms can start exploring ways to make the most of AI and gain a competitive edge. But they must make AI work for human professionals, not the other way around.


Continue Reading A Literal Generative AI Discussion: How AI Could Reshape Law

Isha Marathe, a tech reporter for American Lawyer Media, joined the podcast to discuss her recent article on how deep fake technology is coming to litigation and whether the legal system is prepared. Deep fakes are hyper-realistic images, videos or audio created using artificial intelligence to manipulate or generate fake content. They are easy and inexpensive to create but difficult to detect. Marathe believes deep fakes have the potential to severely impact the integrity of evidence and the trial process if the legal system is unprepared.

E-discovery professionals are on the front lines of detecting deep fakes used as evidence, according to Marathe. However, they currently only have limited tools and methods to authenticate digital evidence and determine if it is real or AI-generated. Marathe argues judges and lawyers also need to be heavily educated on the latest developments in deep fake technology in order to counter their use in court. Regulations, laws and advanced detection technology are still lacking but urgently needed.

Marathe predicts that in the next two to five years, deep fakes will significantly start to affect litigation and pose risks to the judicial process if key players are unprepared. States will likely pass a patchwork of laws to regulate AI-generated images. Sophisticated detection software will emerge but will not be equally available in all courts, raising issues of equity and access to justice.

Two recent cases in which parties claimed evidence was a deep fake highlight the issues at stake, though neither dramatically altered the trial outcome. However, as deep fake technology continues to advance rapidly, it may soon be weaponized to generate highly compelling and persuasive fake evidence that could dupe both legal professionals and jurors. Once seen, such imagery can be hard to ignore, even if proven to be false or AI-generated.

Marathe argues that addressing and adapting to the rise of deep fakes will require a multi-pronged solution: education, technology tools, regulations, and policy changes. But progress on all fronts is slow while threats escalate quickly. Deep fakes are alarming for legal professionals and the public alike, threatening to drag the legal system as a whole into an era of “post-truth,” with trust in the integrity of evidence and trial outcomes at stake. Overall, it was an informative if sobering discussion on the state of the legal system’s preparedness for inevitable collisions with deep fake technology.



Continue Reading The Rise of “Post-Truth” Litigation: ALM’s Isha Marathe on How Deep Fakes Threaten the Legal System (TGIR Ep. 209)

This week we bring in Christian Lang, the CEO and founder of LEGA, a company that provides a secure platform for law firms and legal departments to safely implement and govern the use of large language models (LLMs) like OpenAI’s GPT-4, Google’s Bard, and Anthropic’s Claude. Christian talks with us about why he started LEGA, the value LEGA provides to law firms and legal departments, the challenges around security, confidentiality, and other issues as LLMs become more widely used, and how LEGA helps solve those problems.

Christian started LEGA after gaining experience working with law firms through his previous company, Reynen Court. He saw an opportunity to give law firms a way to quickly implement and test LLMs while maintaining control and governance over data and compliance. LEGA provides a sandbox environment for law firms to explore different LLMs and AI tools to find use cases. The platform handles user management, policy enforcement, and auditing to give firms visibility into how the technologies are being used.

Christian believes law firms want to use technologies like LLMs but struggle with how to do so securely and in a compliant way. LEGA allows them to get started right away without a huge investment in time or money. The platform is also flexible enough to work with any model a firm wants to use. As law firms get comfortable, LEGA will allow them to scale successful use cases across the organization.

On the challenges law firms face, Christian points to shadow IT: people will find ways to use these technologies with or without the firm’s permission, so firms need to provide good options to users or risk losing control and oversight. He also discusses the difficulty of training new lawyers as LLMs make some tasks too easy, the coming market efficiencies in legal services, and the strategic curation of knowledge that will still require human judgment.

Some potential use cases for law firms include live chatbots, document summarization, contract review, legal research, and market intelligence gathering. As models allow for more tailored data inputs, the use cases will expand further. Overall, Christian is excited for how LLMs and AI can transform the legal industry but emphasizes that strong governance and oversight are key to implementing them successfully.



Continue Reading Christian Lang on Governing the Rise of LLMs: How LEGA Provides a Safe Space for Law Firms to Use AI (TGIR Ep. 206)

We talk with Michael Bommarito, CEO of 273 Ventures and a well-known innovator and thinker in legal technology and education. Bommarito and his colleague Daniel Katz were behind GPT-3 and GPT-4 taking the bar exam. While he and Katz understand the hype in the media reaction, he notes that most of the legal and technology experts following the advancements in generative AI expected the results and had already moved on to the next phase of AI’s use in legal.

Michael Bommarito on his farm in Michigan, alongside his trusty friend Foggy.

We talked to Michael a couple of days before news broke about a lawyer in New York who relied on ChatGPT to write a brief submitted to the court, not understanding that AI tools can completely make up cases, fact patterns, and citations. Even so, he discusses how we are falling behind in educating law students and others on how to use large language models (LLMs) properly. In fact, if law schools don’t start teaching 1Ls and 2Ls immediately, they will be doing a disservice to their students for many years to come.

Currently, Bommarito is following up his work at LexPredict, which was sold to Elevate Services in 2018, with 273 Ventures and Kelvin.Legal. With these companies, he aims to bring more efficiency and reduce marginal costs in the legal industry through the application of AI. He sees the industry as one that primarily deals with information and knowledge, yet continues to struggle with high costs and inefficiency. With 273 Ventures and Kelvin.Legal, he is building solutions to help firms bring order to the chaos that is their legal data.

AI and data offer promising solutions for the legal industry but foundational issues around education and adaptation must be addressed. Bommarito explains that decades of inefficiency and mismatched data need to be adjusted before the true value of the AI tools can be achieved. He also believes that while there might have been many false starts on adjustments to the billable hour through things like Alternative Fee Arrangements (AFAs) in the past, the next 12-36 months are going to be pivotal in shifting the business model of the legal industry.



Continue Reading Michael Bommarito on Preparing Law Students for the Future, and His Quest on Bringing Order to the Chaos of Legal Data (TGIR Ep. 205)

Many of us have wondered when the big two legal information providers would jump into the generative AI game, and it looks like LexisNexis is going public first with the launch of Lexis+ AI. We sat down with Jeff Pfeifer, Chief Product Officer for the UK, Ireland, Canada, and US, to discuss the launch and what it means for LexisNexis going forward.

Pfeifer discusses how Lexis+ AI offers conversational search, summarization, and drafting tools to help lawyers work more efficiently. The tools provide contextual, iterative search so users can refine and improve results. The drafting tools can help with tasks like writing client emails or advisory statements, and the summarization features can digest datasets like regulations, opinions, and complaints.

LexisNexis is working with leading AmLaw 50 firms in a commercial preview program to get feedback and input on the AI tools. It also launched an AI Insider Program for any interested firm to learn about AI and how it may impact their business. There is clearly demand: over 3,000 law firms have already signed up.

Pfeifer emphasizes LexisNexis’ focus on responsible AI. They developed and follow a set of AI principles to guide their product development, including understanding the real-world impact, preventing bias, explaining how solutions work, ensuring human oversight and accountability, and protecting privacy.

Pfeifer predicts AI tools like Lexis+ AI will increase access to legal services by allowing lawyers and firms to work more efficiently and take on more work. He also sees opportunities to use the tools to help pro se litigants and courts. However, he cautions that the responsible and ethical development and use of AI is crucial.

Overall, Pfeifer believes AI will greatly improve efficiency and capacity in the legal profession over the next 2-5 years but that responsible adoption of the technology is key.



Continue Reading LexisNexis Bets Big on AI Transforming the Legal Industry: Jeff Pfeifer on the Launch of Lexis+ AI (TGIR Ep. 203)

In this riveting episode of our podcast, we delve into the fascinating world of AI in the legal industry with our esteemed guests, Nathan Walter and Bridget Albiero. Walter, a former attorney and founder of BriefPoint.ai, has leveraged his legal expertise and passion for technology to automate the manual processes that often bog down law firms. Bridget Albiero, a User Experience (UX) and User Design (UD) expert, underscores the significance of intuitive design in making these AI tools not just effective, but user-friendly.

Nathan Walter has been instrumental in creating BriefPoint.ai, a tool designed specifically for lawyers to eliminate the mundane and time-consuming aspects of law practice. With the goal of automating the litigation process from response to appeal, Walter and Albiero are focused on removing mundane tasks, such as typing, from the process, allowing attorneys to focus on their legal experience and expertise rather than the grunt work that already takes up too much of their time.

However, the success of such tools is not solely dependent on their technical capabilities. Bridget Albiero’s role in UX and UD ensures that these AI systems are designed with the end user in mind. With the mission of making legal professionals’ lives easier, Bridget’s work is critical in crafting an interface that enables lawyers to accomplish more work in less time, truly maximizing the benefits of AI integration.

Nathan and Bridget’s collaboration epitomizes the intersection of law and technology. They argue that the advent of AI tools such as BriefPoint.ai will invariably put pressure on law firms to rethink their traditional billing models. Nathan anticipates a shift toward contingency-fee and flat-fee billing, spurred by the increased efficiency AI brings to the table. In addition, because plaintiffs’ lawyers can reduce the overall amount of work each case requires, they will have more incentive to take on matters they might otherwise not consider. This has a multitude of effects, from flooding courts with more and more cases, to overwhelming defense firms and corporations with a much higher volume of litigation matters, to making plaintiffs’ firms more attractive to associates who don’t want to work the hours required in BigLaw firms.

Bridget also emphasizes the importance of approaching AI with a balanced perspective. While there are a number of positives when it comes to AI in the legal process, there are also downsides that need to be considered, and Bridget and Nathan run through some of those issues as well. This thoughtful conversation offers a unique insight into the future of the legal industry, where AI and human ingenuity work hand in hand. Listen in to learn more about how these changes might soon be reshaping the legal landscape.


Continue Reading Lawyer vs. AI or Lawyers + AI: Embracing the Future of Legal Practice with BriefPoint.ai’s Nathan Walter and Bridget Albiero (TGIR Ep. 202)

In this episode of The Geek in Review Podcast, hosts Greg Lambert and Marlene Gebauer interview Richard Tromans, founder of Tromans Consulting and artificiallawyer.com. Tromans shares his insights on the future of legal innovation and the upcoming Legal Innovators California conference, scheduled to take place on June 7-8 in San Francisco.

Tromans begins the conversation by highlighting the role of artificial intelligence (AI) in the legal industry. He emphasizes the importance of not only adopting AI but also using it to its full potential to deliver better legal services. He also discusses the potential impact of AI on law firm business models.

Moving on to the topic of alternative legal service providers (ALSPs), Tromans examines their role in the legal industry and how it has evolved over time. He believes that the future of ALSPs depends on their ability to embrace technology and shift their focus from being mere “bodyshops” to incorporating more sophisticated technology and consulting services.

The discussion then moves on to the Legal Innovators California conference. Tromans shares his views on what attendees can expect, including insights into the latest legal innovation trends, opportunities for cross-fertilization between private practice and in-house legal teams, and exposure to a variety of ALSPs.

Tromans also shares information on his own platform, artificiallawyer.com, which provides news, features, and educational videos related to legal innovation, and the upcoming conference. He invites listeners to check out the conference website, legalinnovatorscalifornia.com, for more information.

Tromans emphasizes the need for the legal industry to shift its focus from traditional metrics like profits and risk reduction to a more holistic approach that considers broader outcomes. He believes this shift will take time, but he is hopeful that the Legal Innovators California conference and similar events will pave the way for the industry to move in this direction.


Continue Reading Richard Tromans on the Future of Legal Innovation and The Legal Innovators California Conference (TGIR Ep. 201)