This week we catch up with Jeff Pfeifer and Serena Wellen from LexisNexis to discuss the rapid development of AI tools for the legal industry over the past year. Pfeifer and Wellen give us an insider’s view of what it took to bring their Lexis+ AI tool to market and the balance between speed to market and gathering solid customer guidance on what lawyers need in a legal-focused generative AI tool. Between the initial version released to a select group of customers and the current version, the product grew from an open-ended chat interface into more of a guided resource that helps users create and follow up on prompts. As with most AI tools created in the past year, there is still more potential to unlock as more customers use it and give critical feedback along the way.

In addition to Lexis+ AI, LexisNexis has now launched two additional AI products – Lexis Snapshot and Lexis Create. Lexis Snapshot summarizes legal complaints to help firms monitor litigation. Lexis Create brings AI capabilities directly into Microsoft Word to assist with drafting and research while lawyers are working on documents. The goal is to embed insights where lawyers are actually doing their work rather than in separate AI tools.

While LexisNexis’s initial generative AI tools focused on the US market, Serena Wellen and her team are busy expanding them to a more international reach. This requires adapting the models, content, and interface to different languages and legal systems. It is a complex undertaking, but Wellen explains how LexisNexis draws on content and editors around the world to help customize the tools. Surprisingly, desired use cases are fairly consistent globally, spanning both simple legal tasks and more advanced legal research and drafting.

Greg Lambert brings up a recent LinkedIn discussion that he had with Microsoft’s Jason Barnwell, where Barnwell told him that today’s generative AI tools are “the worst these things will ever be.” In response, Pfeifer says that LexisNexis is focused on continuously improving answer quality to build trust and prove the value of AI to skeptical lawyers. LexisNexis is leveraging relationships with companies like Microsoft to reinforce the stability and progress being made.

Wellen and Pfeifer look into the future and predict that AI assistants will become highly personalized to individual lawyers. AI agents will also proliferate across platforms to help automate tasks and workflows. Law firms will likely accelerate their adoption of AI tools based on rising expectations and demands from corporate legal departments to work more efficiently.

Listen on mobile platforms: Apple Podcasts | Spotify | YouTube

Contact Us:

Twitter: @gebauerm or @glambert
Threads: @glambertpod or @gebauerm66
Email: geekinreviewpodcast@gmail.com
Music: Jerry David DeCicca

Transcript

Continue Reading Pfeifer and Wellen Give an Inside Look at LexisNexis’ AI Sprint (TGIR Ep. 230)

Vanderbilt Law School recently launched an exciting new initiative called the Vanderbilt AI Legal Lab (VAILL) to explore how artificial intelligence can transform legal services and access to justice. In this episode, we spoke with VAILL’s leadership – Cat Moon, Director of Innovation at Vanderbilt’s Program on Law and Innovation (PoLI), and Mark Williams, Associate Director for Collections and Innovation at the Massey Law Library – about their vision for this pioneering lab.

VAILL’s mission is to harness AI to expand access to legal knowledge and services, with a particular focus on leveraging generative AI to improve legal service delivery. As Moon described, VAILL aims to experiment, collaborate widely, and build solutions to realize AI’s potential in the legal domain. The lab will leverage Vanderbilt’s cross-disciplinary strengths, drawing on experts in computer science, engineering, philosophy, and other fields to inform their ethically-grounded, human-centered approach.

VAILL is prioritizing partnerships across sectors – courts, law firms, legal aid organizations, alternative providers, and others – to test ideas and develop prototype AI applications that solve real legal needs. For instance, they plan to co-create solutions with Legal Aid of North Carolina’s Innovation Lab to expand access to justice. Moon explained that generative AI presents solutions for some legal challenges, so VAILL hopes to match developing technological capabilities with organizations’ needs.

Ethics are foundational to VAILL’s work. Students will learn both practical uses of AI in law practice and broader policy and social implications. As Williams emphasized, beyond core professional responsibility issues, VAILL aims to empower students to lead in shaping AI’s societal impacts through deeper engagement with questions around data, access, and algorithms. Teaching ethical, creative mindsets is VAILL’s ultimate opportunity.

VAILL will leverage the resources and expertise of Vanderbilt’s law librarians to critically assess new AI tools from their unique perspective. Williams noted that the lab sees law students as a “risk free” testing ground for innovations, while also equipping them with adaptable learning capabilities to keep pace with AI’s rapid evolution. Rather than viewing AI as a differentiator, VAILL’s goal is producing legally-skilled innovators ready to thrive amidst ongoing change.

Vanderbilt’s AI Legal Lab represents an exciting development in exploring AI’s legal impacts. By emphasizing human-centered, ethical approaches and collaborations, VAILL aims to pioneer solutions that expand access to legal knowledge and services for all. We look forward to seeing the innovative applications VAILL develops at the intersection of law and AI.

Listen on mobile platforms: Apple Podcasts | Spotify | YouTube

Contact Us:

Twitter: @gebauerm or @glambert
Threads: @glambertpod or @gebauerm66
Email: geekinreviewpodcast@gmail.com
Music: Jerry David DeCicca

Transcript

Continue Reading TGIR Ep. 228 – Cat Moon and Mark Williams Launch the New Vanderbilt AI Law Lab (VAILL)

On a special “on location” episode of The Geek in Review, Greg Lambert sits down with vLex’s Damien Riehl for a hands-on demonstration of the new generative AI tool called Vincent AI. While at the Ark KM Conference, Riehl explains that vLex has amassed a huge legal dataset over its 35-year history, which allows it to now run its own large language models (LLMs). The recent merger between vLex and Fastcase has combined their datasets to create an even more robust training corpus.

Riehl demonstrates how Vincent AI works by having it research a question on trade secret law and employee theft of customer lists. It retrieves relevant cases, statutes, regulations, and secondary sources, highlighting the most relevant passages. It summarizes each source and provides a confidence rating on how well each excerpt answers the initial question. Vincent AI then generates a legal memorandum summarizing the relevant law. Riehl explains how this is more trustworthy than a general chatbot like ChatGPT because it is grounded in real legal sources.

Riehl shows how Vincent AI can compare legal jurisdictions by generating memorandums on the same question for California, New York, the UK, and Spain. It can even handle foreign language sources, translating them into English. This allows for efficient multi-jurisdictional analysis. Riehl emphasizes Vincent AI’s focus on asking straightforward questions in natural language rather than requiring complex prompts.

Looking ahead, Riehl sees potential for Vincent AI to leverage external LLMs like Anthropic’s Claude model as well as their massive dataset of briefs and motions to generate tailored legal arguments statistically likely to persuade specific judges on particular issues. He explains this requires highly accurate tagging of documents which they can achieve through symbolic AI. Riehl aims to continue expanding features without requiring lawyers to become AI prompt engineers.

On access to justice, Riehl believes AI can help legal aid and pro bono attorneys handle more matters more efficiently. He also sees potential for AI assistance to pro se litigants to promote fairer outcomes. For judges, AI could help manage pro se cases and expedite decision-making. Overall, Riehl is optimistic about AI augmenting legal work over the next two years through ongoing improvements.

Riehl discusses vLex’s new Vincent AI system and its ability to efficiently research legal issues across jurisdictions and across languages. He provides insight into the technology’s development and potential while emphasizing understandable user interaction. The conversation highlights AI’s emerging role in legal services to increase productivity, insight, and access to justice.

Listen on mobile platforms: Apple Podcasts | Spotify | YouTube (NEW!)

Links:

vLex Vincent AI

Contact Us:

Twitter: @gebauerm or @glambert

Threads: @glambertpod or @gebauerm66

Email: geekinreviewpodcast@gmail.com

Music: Jerry David DeCicca

Transcript


Continue Reading vLex’s Damien Riehl on Examining vLex’s New Vincent AI (TGIR Ep. 227)

This episode of The Geek in Review podcast provides an in-depth look at how the AI assistant Paxton, created by Tanguy Chau and Mike Ulin, is transforming legal work. The hosts speak with the founders of Paxton to explore the pain points their technology aims to solve and how generative AI can enhance lawyers’ capabilities.

Tanguy and Mike discuss their backgrounds in AI, regulatory compliance, venture capital, and management consulting. This diverse experience informed their vision for Paxton as an AI assistant specifically built for legal and compliance professionals. They explain that Paxton is trained on millions of legal documents and regulations, allowing it to search this vast knowledge and retrieve highly relevant information rapidly. A key feature they highlight is Paxton’s accuracy in citing sources, with every sentence linked back to the original text.

One of the key features of Paxton is that it can automate repetitive, low-value legal work to make lawyers more efficient. Tanguy notes that tasks like reviewing thousands of sales contracts clause-by-clause or compiling 50-state surveys that once took weeks can now be done by Paxton in minutes. Mike discusses Paxton’s advanced document comparison capabilities that go beyond keyword matching to understand meaning and intent. This allows quick, substantive analysis of contracts, marketing materials, and more.

Exploring the future, Mike predicts that like software developers, lawyers who embrace AI will become much more productive. But higher-level strategic thinking will remain uniquely human. Tanguy shares an analogy of a human on a bicycle outpacing a condor, the most efficient animal. He believes combining human creativity with AI tools like Paxton will enable radically new levels of efficiency and capability.

Paxton.AI’s Tanguy and Mike make a compelling case that AI-powered tools such as Paxton will fundamentally transform legal work. By automating repetitive tasks, AI will free lawyers to focus on high-value, client-facing work. Overall, this episode provides great insights into how generative AI may soon become indispensable for legal professionals seeking to improve their productivity and capabilities.

As a special treat, we wrap up the interview with a demonstration of Paxton.AI’s capabilities. (YouTube only)

Links:

Paxton AI (try the Beta for free)

Forbes Article: Unlocking The 10x Lawyer: How Generative AI Can Transform The Legal Landscape

Using Generative AI to analyze the 45 page Trump Indictment using Paxton AI

Unveiling Paxton AI’s Newest Features: Boolean Composer and Document Compare

Instantly Analyzing the Congressional UFO Hearing with Generative AI powered by Paxton AI

Transcript

Continue Reading Paxton.AI’s Tanguy Chau & Michael Ulin: How AI Allows Legal Work to Soar to New Heights (TGIR Ep. 220)

On this episode of The Geek in Review podcast, hosts Marlene Gebauer and Greg Lambert interview Kris Martin, Executive Vice President at Harbor, to discuss innovation, AI, and the future of technology in the legal industry.

They open the show by talking about Harbor’s upcoming LINKS conference and its keynote focused on exploring human potential in the age of AI. Kris provides background on how the conference originated from an annual survey Harbor conducts to take the pulse of the legal community. He explains Harbor’s goal is to support legal organizations through insights, research, and events like the LINKS conference.

The discussion moves to AI trends and Jean O’Grady and Harbor’s strategic Start/Stop Survey results. The survey reveals 93% of firms are actively exploring AI tools, with top vendors being Casetext and Thomson Reuters. Kris emphasizes law librarians play a crucial role in evaluating new AI technologies and guiding procurement decisions as firms adopt these tools.

However, with tight budgets at most firms, Kris Martin points out it’s also important to audit usage data of existing resources. This helps inform negotiations with vendors to find cost savings on research contracts. He notes legal research tools will inevitably integrate more tightly into lawyers’ daily workflows in the future. To facilitate this, Kris introduces the concept of “citizen coders” – enabling lawyers to articulate their needs and processes in technology and code terms rather than just legal terms.

Shifting gears, Kris elaborates on Harbor’s recent rebranding from HBR Consulting, which brought six previously merged companies together under one unified brand and mission. He explains that true integration requires deep alignment across the merged organizations on culture, vision, and values. This allows Harbor to provide end-to-end solutions for legal organizations.

Looking to the future, Kris predicts legal research will become even more embedded into lawyer workflows over the next 2-5 years, rather than a separate step. He sees law libraries’ roles evolving as AI capabilities increasingly integrate into legal processes and tools. Rather than going to a separate platform for research, lawyers will access these AI-enhanced tools in their daily workflows.

Overall, the wide-ranging discussion provides insights into Harbor’s efforts to support the legal community through conferences, research, and integrating emerging technologies. Kris highlights the importance of law librarians evaluating and implementing new AI tools while also managing costs through audits. He leaves listeners with an optimistic vision for the future where legal research and lawyers’ needs are more tightly connected through technology.

Links:

Listen on mobile platforms: Apple Podcasts | Spotify | YouTube (NEW!)

Contact Us:

Twitter: @gebauerm or @glambert
Threads: @glambertpod or @gebauerm66
Voicemail: 713-487-7821
Email: geekinreviewpodcast@gmail.com
Music: Jerry David DeCicca

Transcript

Continue Reading LINKS, Citizen Coders & Auditing Shelfware: Insights from Harbor’s Kris Martin

This week we are joined by Brandon Wiebe, General Counsel and Head of Privacy at Transcend. Brandon discusses the company’s mission to develop privacy and AI solutions. He outlines the evolution from manual governance to technical solutions integrated into data systems. Transcend saw a need for more technical privacy and AI governance as data processing advanced across organizations.

Wiebe provides examples of AI governance challenges, such as engineering teams using GitHub Copilot and sales/marketing teams using tools like Jasper. He created a lightweight AI Code of Conduct at Transcend to give guidance on responsible AI adoption. He believes technical enforcement like cataloging AI systems will also be key.

On ESG governance changes, Wiebe sees parallels to privacy regulation evolving from voluntary principles to specific technical requirements. He expects AI governance will follow a similar path but much faster, requiring legal teams to become technical experts. Engaging early and lightweight in development is key.

Transcend’s new Pathfinder tool provides observability into AI systems to enable governance. It acts as an intermediary layer between internal tools and foundation models like OpenAI. Pathfinder aims to provide oversight and auditability into these AI systems.

Looking ahead, Wiebe believes GCs must develop deep expertise in AI technology, either themselves or by building internal teams. Understanding the technology will allow counsels to provide practical and discrete advice as adoption accelerates. Technical literacy will be critical.

Listen on mobile platforms: Apple Podcasts | Spotify | YouTube (NEW!)

Contact Us:

Twitter: @gebauerm or @glambert
Threads: @glambertpod or @gebauerm66
Voicemail: 713-487-7821
Email: geekinreviewpodcast@gmail.com
Music: Jerry David DeCicca

Transcript

Continue Reading Observing the Black Box: Transcend’s Brandon Wiebe’s Insights into Governing Emerging AI Systems (TGIR Ep. 218)

In this episode of The Geek in Review, hosts Greg Lambert and Marlene Gebauer interview three guests from UK law firm Travers Smith about their work on AI: Chief Technology Officer Oliver Bethell, Director of Legal Technology Shawn Curran, and AI Manager Sam Lansley. They discuss Travers Smith’s approach to testing and applying AI tools like generative models.

A key focus is finding ways to safely leverage AI while mitigating risks like copyright issues and hallucination. Travers Smith built an internal chatbot called YCNbot to experiment with generative AI through secure enterprise APIs. They are being cautious on the generative side but see more revolutionary impact from reasoning applications like analyzing documents.

Travers Smith has open sourced tools like YCNbot to spur responsible AI adoption. Collaboration with 273 Ventures helped build in multi-model support. The team is working on reducing dependence on manual prompting and increasing document analysis capabilities. They aim to be model-agnostic to hedge against reliance on a single vendor.

On model safety, Travers Smith emphasizes training data legitimacy, multi-model flexibility, and probing hallucination risks. They co-authored a paper on subtle errors in legal AI. Dedicated roles like prompt engineers are emerging to interface between law and technology. Travers Smith is exploring AI for tasks like contract review but not yet for work product.

When asked about the crystal ball for legal AI, the guests predicted the need for equitable distribution of benefits, growth in reasoning applications vs. generative ones, and movement toward more autonomous agents over manual prompting. Info providers may gain power over intermediaries applying their data.

This wide-ranging discussion provides an inside look at how one forward-thinking firm is advancing legal AI in a prudent and ethical manner. With an open source mindset, Travers Smith is exploring boundaries and sharing solutions to propel the responsible use of emerging technologies in law.

Links:

Listen on mobile platforms: Apple Podcasts | Spotify | YouTube (NEW!)

Contact Us:

Twitter: @gebauerm or @glambert
Threads: @glambertpod or @gebauerm66
Voicemail: 713-487-7821
Email: geekinreviewpodcast@gmail.com
Music: Jerry David DeCicca

Transcript

Continue Reading Deploying Cutting-Edge Legal AI: Travers Smith’s Cautious, But Open-Source Approach (TGIR Ep. 216)

This is part 3 in a 3-part series. Part 1 questions Goldman Sachs data showing that 44% of legal tasks could be replaced by Generative AI. In Part 2, we find some better data and estimate an upper limit of 23.5% of revenue that could be reduced by Generative AI. All of our assertions and assumptions will be discussed in further detail in a free LVN Webinar on August 15th.

The Big Idea:  We apply reductions in hours due to Generative AI to a few matters to determine Generative AI’s potential effect on profitability.
Key Points:
  • We establish a baseline sample matter and compare changes to that sample matter when Generative AI is applied
  • We explore how leverage is affected by Generative AI and how those changes may affect profitability in unexpected ways

Determining Generative AI’s effect on law firm profitability requires a bit more than a “back of the napkin” calculation with rough percentages based on keywords in time entries, as we did when roughly calculating the effect on revenue.

As Toby pointed out at the end of the last post, Generative AI is unlikely to hit all timekeepers equally.

We begin with this assertion.

Generative AI will disproportionately impact non-partner hours.

We are comfortable making this assertion for two reasons:

  1. Generative AI, in its current state, is most likely to replace or shorten the time to complete lower complexity and lesser specialized tasks that should be performed at the associate or paralegal level.
  2. Any time legal work hours are reduced, Partners tend to protect their own hours.

With that in mind, Toby began a profitability analysis, beginning with a baseline sample matter that does not factor in any use of Generative AI. We will use this baseline to compare against our AI adjusted matters.


Baseline M&A Sample Matter Data

Our baseline sample matter is loosely modeled on an M&A transaction and includes 5 timekeepers:

  • an Equity Partner
  • a 17th-year service partner
  • 10th-, 7th-, and 3rd-year associates
TK      Hours    Rate    Realization    Revenue    Expense    Profit
EP         80   $1,000       88%        $70,400    $15,200    $55,200
SP17      100     $895       88%        $78,760    $48,500    $30,260
10yr      125     $735       88%        $80,850    $48,750    $32,100
7yr        90     $660       88%        $52,272    $32,400    $19,872
3yr        55     $595       88%        $28,798    $18,150    $10,648

Estimated Annual Profit Per Equity Partner (PPEP) – $1,851 profit per EP hour × 1,400 hrs = $2,591,400

Leverage – 60% Non-Partner Hours


There are, of course, a number of assumptions in this baseline data that could greatly change from firm to firm, including the billable rates, the realization rate, and the expense for each timekeeper. However, we will keep this baseline data consistent across all of our examples in order to make a fair comparison. With different rates, realization, and expenses you will get different results. We strongly encourage every firm to perform a similar calculation for themselves.

Baseline Matter Analysis

The total hours billed are 450. The total revenue is $311k and the total profit in dollars is $148k.

Our model then translates the profit on this one matter into an estimated PPEP number for the firm. This is so we can determine profit margin impact separate from profit dollars.

In this baseline model, the PPEP number is ~$2.6m, meaning that if all work at this firm were staffed and billed like this one matter, the firm’s average PPEP would be about $2.6m.
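The baseline arithmetic is easy to script and sanity-check. Here is a minimal Python sketch; the figures come from the baseline table, and the profit-per-EP-hour and PPEP formulas are our reading of how the $1,851 and $2,591,400 numbers are derived:

```python
# Baseline M&A matter model: revenue = hours * rate * realization,
# profit = revenue - expense (expense assumed to be each timekeeper's cost).
REALIZATION = 0.88

# (timekeeper, hours, rate, expense) -- values from the baseline table
timekeepers = [
    ("EP",    80, 1000, 15_200),
    ("SP17", 100,  895, 48_500),
    ("10yr", 125,  735, 48_750),
    ("7yr",   90,  660, 32_400),
    ("3yr",   55,  595, 18_150),
]

rows = []
for name, hours, rate, expense in timekeepers:
    revenue = hours * rate * REALIZATION
    rows.append((name, hours, revenue, revenue - expense))

total_hours   = sum(hours for _, hours, _, _ in rows)    # 450
total_revenue = sum(rev for _, _, rev, _ in rows)        # ~$311k
total_profit  = sum(p for _, _, _, p in rows)            # ~$148k

# PPEP translation: profit per equity-partner hour, annualized at 1,400 EP hours.
profit_per_ep_hour = total_profit / 80                   # ~$1,851
ppep = profit_per_ep_hour * 1_400                        # ~$2,591,400

# Leverage: share of hours billed by non-partners (the three associates).
leverage = sum(h for name, h, _, _ in rows if name.endswith("yr")) / total_hours
```

As the post encourages, a firm can rerun this with its own rates, realization, and costs simply by editing the list.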

Leverage

There’s an old adage in economic circles: “Workers Work. Owners Benefit.”
Continue Reading AI-Pocalypse: The Shocking Impact on Law Firm Profitability

by 3 Geeks (Ryan McClead, Greg Lambert, and Toby Brown)

This is part 2 in a 3-part series. The first part is here. Part 3 is here.

The Big Idea: We found a much better dataset, though still small, from which to extrapolate actual effects of Generative AI on the legal industry.

Key takeaways:

  • We got anonymized and summarized data for 10 corporate legal departments from LexisNexis CounselLink
  • The data showed that almost 40% of time entries, representing 47% of billings, could potentially use Generative AI.
  • We estimate that a realistic initial upper limit for Generative AI would be to reduce that work by half, or 20% of time entries and 23.5% of revenue
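Each takeaway number is a single multiplication. A tiny sketch, where the percentages come from the CounselLink data and the 50% reduction factor is the post's assumed upper limit rather than measured data:

```python
share_of_entries  = 0.40  # ~40% of time entries contain "Draft" or "Review"
share_of_billings = 0.47  # those entries represent 47% of billings
reduction_factor  = 0.50  # assumed upper limit: Generative AI halves that work

entries_reduced = share_of_entries * reduction_factor   # 20% of time entries
revenue_reduced = share_of_billings * reduction_factor  # 23.5% of revenue
```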

In the previous post, Ryan got tired of hearing the Goldman Sachs “44% of Legal is going away” stat being quoted uncritically and decided to actually look into the underlying data used in their report. Ryan’s exploration of the data is an interesting story in and of itself, but the bottom line is that the data is fuzzy at best, the sample size is laughable, and the breathlessly unquestioning reporting on Goldman’s study has been remarkably sloppy.

After writing up his findings, Ryan shared that post with Greg and Toby, and the question quickly arose, “can we find some actual, useful data to better understand the effect that Generative AI might actually have on law firms?” Greg reached out to Kris Satkunas from LexisNexis CounselLink, a recent interviewee on The Geek in Review, to see if CounselLink could share some anonymized benchmark data for us to analyze.

LexisNexis CounselLink Data

As a reminder, the Goldman data came from survey questions about how important certain “work tasks” were to respondents’ jobs. Those tasks included things like “Getting Information”, “Identifying Objects, Actions, and Events”, and “Scheduling Work and Activities”. These are quite vague and wide open to interpretation.

In an attempt to find more useful data for our purposes, we asked Kris for the percentages of all time entries that included the keywords “Draft” or “Review” in the description. Our assumption is that those two terms will capture a large percentage of actual time entries in which lawyers are likely to use Generative AI. We fully recognize that this simple heuristic will not produce a clean data set from which to extrapolate definitive results, but as a first pass at some real data, we believe this gives us a nice estimate of tasks that could potentially be ripe for automation with Generative AI.
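As an illustration of that heuristic (the entries below are hypothetical; the real CounselLink data is anonymized and summarized, and we only received aggregate percentages):

```python
# Flag time entries whose description contains "draft" or "review"
# (case-insensitive), then compute the share of entries and of billings
# that the flagged entries represent.
KEYWORDS = ("draft", "review")

# Hypothetical (description, billed_amount) pairs, for illustration only.
entries = [
    ("Draft merger agreement",                 1200.00),
    ("Review customer list exhibits",           800.00),
    ("Telephone conference with client",        400.00),
    ("Revise and review disclosure schedules",  950.00),
]

def is_candidate(description: str) -> bool:
    text = description.lower()
    return any(keyword in text for keyword in KEYWORDS)

flagged = [(desc, amt) for desc, amt in entries if is_candidate(desc)]
entry_share   = len(flagged) / len(entries)                        # 3 of 4
billing_share = sum(a for _, a in flagged) / sum(a for _, a in entries)
```

On this toy sample, 75% of entries and roughly 88% of billings would be flagged; the real aggregate figures CounselLink reported were roughly 40% and 47%.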
Continue Reading Generative AI Could Reduce Law Firm Revenue by 23.5%

This is the first post in a 3-part series; it first appeared on The Sente Playbook. The other two posts are co-authored by Toby Brown and Greg Lambert and will follow later this week. Apologies for the length of this post, but I was channeling my inner Casey Flaherty.
The Big Idea:  The data that Goldman used is insufficient to make the claims about Generative AI’s effect on legal that their report did.
Key Take-Aways:
  • Reporting about this report is sloppy
  • Reporting within this report is sloppy
  • The underlying data doesn’t tell us much meaningful
  • 3 Geeks attempts to find meaningful data
On March 26, 2023, Goldman Sachs sent shockwaves through the legal industry by publishing a report claiming that 44% of “something” in the Legal Industry was going to be replaced by Generative AI.  I didn’t question that stat at the time, because it sounded about right to me.  I suspect that was true for most people who know the legal industry.  As I’ve heard this stat repeated by multiple AI purveyors actively scaring lawyers into buying their products or services, I eventually started to question its validity.
I started by looking into the press coverage of that 44% number and was immediately confused.  (All emphasis below added by me.)

Law.com  – March 29, 2023
Generative AI Could Automate Almost Half of All Legal Tasks, Goldman Sachs Estimates
“Goldman Sachs estimated that generative AI could automate 44% of legal tasks in the U.S. “

Observer – March 30, 2023
Two-Thirds of Jobs Are at Risk: Goldman Sachs A.I. Study
“The investment bank’s economists estimate that 46% of administrative positions, 44% of legal positions, and 37% of engineering jobs could be replaced by artificial intelligence.”

NY Times – April 10, 2023
A.I. Is Coming for Lawyers, Again
“Another research report, by economists at Goldman Sachs, estimated that 44 percent of legal work could be automated.”

Okay, so which is it?  Generative AI is going to replace 44% of legal tasks, positions, or work?
Because those are 3 very different things; each of which would have extremely different impacts on the industry if they came to pass.  Lest you think I cherry-picked three outlying articles, go ahead and Google “AI Replace 44% Legal Goldman Sachs” and see what you get.  Those 3 articles are in my top 5 results.
My top result as of this writing is a news article from IBL News, writing last Tuesday that Goldman says,  “AI could automate 46% of tasks in administrative jobs, 44% of legal jobs, and 37% of architecture and engineering professions.”
We should probably just go back to what the Goldman Sachs report actually said and then we can chalk this up to lazy tech journalism.  Well, not so fast.  Because while the Goldman researchers clearly say “current work tasks” (see below) even that begins to fall apart once you dig into the underlying data.

What Goldman Sachs actually said in the report

Continue Reading 44% of Investment Bankers Think They Can Make Lots of Money Off of Attorney Insecurity (AI)