This week we are joined by Brandon Wiebe, General Counsel and Head of Privacy at Transcend. Brandon discusses the company’s mission to develop privacy and AI solutions. He outlines the evolution from manual governance to technical solutions integrated into data systems. Transcend saw a need for more technical privacy and AI governance as data processing advanced across organizations.

Wiebe provides examples of AI governance challenges, such as engineering teams using GitHub Copilot and sales/marketing teams using tools like Jasper. He created a lightweight AI Code of Conduct at Transcend to give guidance on responsible AI adoption. He believes technical enforcement like cataloging AI systems will also be key.

On ESG governance changes, Wiebe sees parallels to privacy regulation, which evolved from voluntary principles to specific technical requirements. He expects AI governance will follow a similar path but much faster, requiring legal teams to become technical experts. Engaging early in development with lightweight guidance is key.

Transcend’s new Pathfinder tool provides observability into AI systems to enable governance. It acts as an intermediary layer between internal tools and foundation models from providers like OpenAI. Pathfinder aims to provide oversight and auditability into these AI systems.

Looking ahead, Wiebe believes GCs must develop deep expertise in AI technology, either themselves or by building internal teams. Understanding the technology will allow counsel to provide practical, concrete advice as adoption accelerates. Technical literacy will be critical.

Listen on mobile platforms: Apple Podcasts | Spotify | YouTube (NEW!)

Contact Us:

Twitter: @gebauerm or @glambert
Threads: @glambertpod or @gebauerm66
Voicemail: 713-487-7821
Email: geekinreviewpodcast@gmail.com
Music: Jerry David DeCicca

Transcript

Continue Reading Observing the Black Box: Transcend’s Brandon Wiebe’s Insights into Governing Emerging AI Systems (TGIR Ep. 218)

This week on The Geek in Review, hosts Greg Lambert and Marlene Gebauer spoke with Katie DeBord and Kristin Zmrhal, two vice presidents from legal tech company DISCO. Greg kicked off the episode by discussing his recent work with a Houston nonprofit called Project Remix Ventures that helps at-risk youth. He took their leader on a visit to innovation hub The Ion to showcase reinventing old spaces for new purposes, like DISCO has done with legal tech. The hosts then welcomed Katie DeBord, who moved from being Chief Innovation Officer at law firm Bryan Cave Leighton Paisner to DISCO. In her current role, Katie focuses on leveraging technology like AI to improve the litigation process for lawyers. She drew experience from her past analyst role at the CIA, where she honed her skills in synthesizing complex data sources.

The hosts also introduced Kristin Zmrhal, who has over 20 years of experience in the legal tech space. At DISCO, she helped build their eDiscovery products and services. Kristin explained that DISCO’s vision is to create great legal technology that helps lawyers find evidence faster. Their product suite now covers the entire litigation lifecycle, from intake to discovery to case management. DISCO uses AI tools like their new Celia application to automatically surface insights from case documents, allowing lawyers to review documents more efficiently. They are also careful to cite sources to ensure transparency.

In terms of company culture, Katie and Kristin discussed how DISCO values rapid experimentation, quick decision-making, and collaborating as a team. They also emphasize empathy in how they treat each other and design products for users. Being a public company also gives employees a sense of ownership. On the innovation side, Katie sees billable hours changing due to advancing legal technology, which will impact law firm profitability models. Kristin predicts AI adoption will reach a tipping point in legal tech within 2-5 years, drastically improving processes like eDiscovery. However, regulating AI poses challenges for the legal industry.

For giving back, DISCO has community service and pro bono programs. DISCO Cares allows employees to volunteer locally. Through DISCO Pro Bono, they donate their technology to support pro bono legal matters. This aligns with their mission of making legal services more accessible. When asked for parting thoughts, Katie emphasized lawyers needing to leverage professionals from adjacent disciplines as part of their teams. Kristin reiterated that this is the most exciting time in her 20-year legal tech career, with AI poised to transform legal workflows.

This engaging discussion provided insights into DISCO’s innovative products and empathetic culture. With seasoned experts like Katie and Kristin leading the way, DISCO seems well-positioned to help shape the future of legal technology. Listeners can connect with Katie and Kristin on LinkedIn and find out more about DISCO’s offerings at csdisco.com. Be sure to stay tuned to The Geek in Review for more insights from leaders in legal tech.


Continue Reading Fast, Smart, and Empathetic: How DISCO’s Culture Drives Legal Tech Innovation (TGIR Ep. 217)

In this episode of The Geek in Review, hosts Greg Lambert and Marlene Gebauer interview three guests from UK law firm Travers Smith about their work on AI: Chief Technology Officer Oliver Bethell, Director of Legal Technology Shawn Curran, and AI Manager Sam Lansley. They discuss Travers Smith’s approach to testing and applying AI tools like generative models.

A key focus is finding ways to safely leverage AI while mitigating risks like copyright issues and hallucination. Travers Smith built an internal chatbot called YCNbot to experiment with generative AI through secure enterprise APIs. They are being cautious on the generative side but see more revolutionary impact from reasoning applications like analyzing documents.

Travers Smith has open sourced tools like YCNbot to spur responsible AI adoption. Collaboration with 273 Ventures helped build in multi-model support. The team is working on reducing dependence on manual prompting and increasing document analysis capabilities. They aim to be model-agnostic to hedge against reliance on a single vendor.

On model safety, Travers Smith emphasizes training data legitimacy, multi-model flexibility, and probing hallucination risks. They co-authored a paper on subtle errors in legal AI. Dedicated roles like prompt engineers are emerging to interface between law and technology. Travers Smith is exploring AI for tasks like contract review but not yet for work product.

When asked about the crystal ball for legal AI, the guests predicted the need for equitable distribution of benefits, growth in reasoning applications vs. generative ones, and movement toward more autonomous agents over manual prompting. Info providers may gain power over intermediaries applying their data.

This wide-ranging discussion provides an inside look at how one forward-thinking firm is advancing legal AI in a prudent and ethical manner. With an open source mindset, Travers Smith is exploring boundaries and sharing solutions to propel the responsible use of emerging technologies in law.


Continue Reading Deploying Cutting-Edge Legal AI: Travers Smith’s Cautious, But Open-source Approach. (TGIR Ep. 216)

In this episode of The Geek in Review, hosts Marlene Gebauer and Greg Lambert interview Laura Leopard, founder and CEO of Leopard Solutions, about succession planning challenges facing law firms. Leopard explains that many firms have partners nearing retirement age but no concrete plans for transitioning clients and leadership. This lack of succession planning threatens law firms’ futures.

To make matters worse, Leopard notes, the path to equity partnership is getting longer, making it harder to retain promising senior associates and counsel. Firms have added non-equity partner roles, keeping equity partner numbers small to inflate profits per partner. Leadership lacks incentives to retire, with no retirement plans or continued compensation. All this will hamper recruiting efforts, as younger generations prioritize work-life balance.

She recommends that in order to retain mid-career attorneys, firms must rethink policies on remote work, billable hours, and flexibility. Virtual firms with better lifestyle offerings are growing competitors. But firms seem unwilling to change. Leopard argues everything should be on the table for analysis by outside consultants. Phased retirements and succession mentoring could also help transition clients and power.

Though Laura Leopard (and even Bruce MacEwan) cannot point to examples of firms that have executed succession planning well, it is possible with courageous leadership. She advises setting retirement age limits, crafting written plans, and easing older partners’ exits. A too-big-to-fail mentality persists despite serious business vulnerabilities if talent is not retained and recruited.

Looking ahead, Leopard predicts the rise of virtual firms will shake up the legal industry as they encroach on Big Law territory with alternative fee arrangements. The pandemic accelerated dissatisfaction with law firm partnership and policies. As generational divides grow, flexible virtual firms will keep gaining ground over more rigid large firms.

This engaging discussion unpacks the complex dynamics around law firm succession planning and existential threats posed by lack of preparation. As partners cling to power, can bold leaders emerge to implement creative solutions and secure these institutions’ longevity? Tune in for an insightful examination of forces reshaping the legal landscape.


Continue Reading Laura Leopard on Law Firms’ Current Succession Planning: Step One – Do Nothing (TGIR Ep. 215)

The Geek in Review podcast hosts Marlene Gebauer and Greg Lambert interviewed Nicole Clark, CEO of Trellis, about their new Law Firm Intelligence tool (LFI). This tool allows law firms to analyze aggregated and normalized state trial court data to gain competitive intelligence across cases, practice areas, and performance. Collecting this unstructured data from county courts is very challenging, but provides valuable business insights.

The Law Firm Intelligence tool enables firms to identify growth opportunities, benchmark themselves, and drill down into the data to find strategic insights. Firms can slice and dice the data by region, practice area, time period, and other parameters to get to the most relevant information. LFI also gives litigators specific insights into judges, opposing counsel tactics, and case outcomes.

Trellis uses both technology and human QA processes to ensure the accuracy of the raw trial court data. The data comes directly from the courts, without any alterations by Trellis. This allows Trellis to spot trends like hotspots for certain case types, which can inform law firm strategy and policy implications.

As a newer legal tech company, Trellis initially had to overcome skepticism and get large firms to try their product. But steady growth has now built their credibility. Nicole Clark discussed the challenges of selling into the legal industry as a startup.

Trellis has exciting new AI capabilities in development that will leverage the trove of state court data they have aggregated. Widespread adoption of AI in legal is coming, though the timeline is uncertain. Clark predicts more law firm consolidation and AI startups, but cautions against overestimating what legal tasks AI can solve.


Continue Reading Trellis’ Nicole Clark on Leveraging State Court Data for Competitive Advantage (TGIR Ep. 214)

This is part 3 in a 3-part series. Part 1 questions Goldman Sachs data showing that 44% of legal tasks could be replaced by Generative AI. In Part 2, we find some better data and estimate an upper limit of 23.5% of revenue that could be reduced by Generative AI. All of our assertions and assumptions will be discussed in further detail in a free LVN Webinar on August 15th.

The Big Idea:  We apply reductions in hours due to Generative AI to a few matters to determine Generative AI’s potential effect on profitability.
Key Points:
  • We establish a baseline sample matter and compare changes to that sample matter when Generative AI is applied
  • We explore how leverage is affected by Generative AI and how those changes may affect profitability in unexpected ways

Determining Generative AI’s effect on law firm profitability requires a bit more than a “back of the napkin” calculation with rough percentages based on keywords in time entries, as we did when roughly calculating the effect on revenue.

As Toby pointed out at the end of the last post, Generative AI is unlikely to hit all timekeepers equally.

We begin with this assertion.

Generative AI will disproportionately impact non-partner hours.

We are comfortable making this assertion for two reasons:

  1. Generative AI, in its current state, is most likely to replace or shorten the time to complete lower complexity and lesser specialized tasks that should be performed at the associate or paralegal level.
  2. Any time legal work hours are reduced, Partners tend to protect their own hours.

With that in mind, Toby built a profitability analysis, starting with a baseline sample matter that does not factor in any use of Generative AI. We will use this baseline for comparison against our AI-adjusted matters.


Baseline M&A Sample Matter Data

Our baseline sample matter is loosely modeled on an M&A transaction and includes 5 timekeepers:

  • an Equity Partner
  • a 17th-year service partner
  • 10th-, 7th-, and 3rd-year associates
| TK   | Hours | Rate   | Realization | Revenue | Expense | Profit  |
|------|-------|--------|-------------|---------|---------|---------|
| EP   | 80    | $1,000 | 88%         | $70,400 | $15,200 | $55,200 |
| SP17 | 100   | $895   | 88%         | $78,760 | $48,500 | $30,260 |
| 10yr | 125   | $735   | 88%         | $80,850 | $48,750 | $32,100 |
| 7yr  | 90    | $660   | 88%         | $52,272 | $32,400 | $19,872 |
| 3yr  | 55    | $595   | 88%         | $28,798 | $18,150 | $10,648 |

Estimated Annual Profit Per Equity Partner (PPEP): $1,851 profit per equity partner hour × 1,400 annual EP hours = $2,591,400

Leverage – 60% Non-Partner Hours


There are, of course, a number of assumptions in this baseline data that could greatly change from firm to firm, including the billable rates, the realization rate, and the expense for each timekeeper. However, we will keep this baseline data consistent across all of our examples in order to make a fair comparison. With different rates, realization, and expenses you will get different results. We strongly encourage every firm to perform a similar calculation for themselves.

Baseline Matter Analysis

The total hours billed are 450. The total revenue is about $311k, and the total profit in dollars is about $148k.

Our model then translates the profit on this one matter into an estimated PPEP number for the firm. This is so we can determine profit margin impact separate from profit dollars.

In this baseline model, the PPEP number is ~$2.6m; meaning that if all work at this firm were staffed and billed like this one matter, the firm average PPEP would be about $2.6m.
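The baseline arithmetic can be reproduced with a short script. This is a sketch using only the figures from the table above; note that the per-timekeeper profits sum to $148,080, and dividing that by the 80 equity partner hours yields the $1,851 per-hour figure behind the PPEP estimate.

```python
# Reproduce the baseline M&A matter economics from the table above.
# All inputs (hours, rates, realization, expenses) come directly from the post.

timekeepers = {
    # name: (hours, rate, realization, expense)
    "EP":   (80,  1000, 0.88, 15_200),
    "SP17": (100,  895, 0.88, 48_500),
    "10yr": (125,  735, 0.88, 48_750),
    "7yr":  (90,   660, 0.88, 32_400),
    "3yr":  (55,   595, 0.88, 18_150),
}

# Revenue = hours x rate x realization; profit = revenue - expense
revenue = {tk: h * r * real for tk, (h, r, real, _) in timekeepers.items()}
profit = {tk: revenue[tk] - exp for tk, (_, _, _, exp) in timekeepers.items()}

total_hours = sum(h for h, _, _, _ in timekeepers.values())
total_revenue = sum(revenue.values())
total_profit = sum(profit.values())

# Profit per equity-partner hour, annualized over 1,400 EP hours -> PPEP
profit_per_ep_hour = total_profit / timekeepers["EP"][0]
ppep = profit_per_ep_hour * 1400

# Leverage: share of total hours billed by the three non-partner associates
non_partner_hours = sum(timekeepers[tk][0] for tk in ("10yr", "7yr", "3yr"))
leverage = non_partner_hours / total_hours

print(total_hours)           # 450
print(round(total_revenue))  # 311080
print(round(total_profit))   # 148080
print(round(ppep))           # 2591400
print(leverage)              # 0.6
```

Swapping in your own firm's rates, realization, and expense assumptions is a one-line change per timekeeper, which is exactly the per-firm recalculation the post encourages.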

Leverage

There’s an old adage in economic circles: “Workers Work. Owners Benefit.”
Continue Reading AI-Pocalypse: The Shocking Impact on Law Firm Profitability

by 3 Geeks (Ryan McClead, Greg Lambert, and Toby Brown)

This is part 2 in a 3-part series. The first part is here. Part 3 is here.

The Big Idea: We found a much better dataset, though still small, from which to extrapolate actual effects of Generative AI on the legal industry.

Key takeaways:

  • We got anonymized and summarized data for 10 corporate legal departments from LexisNexis CounselLink
  • The data showed that almost 40% of time entries, representing 47% of billings, could potentially use Generative AI.
  • We estimate that a realistic initial upper limit for Generative AI would be to reduce that work by half, or 20% of time entries and 23.5% of revenue
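The arithmetic behind those takeaways is simply a halving of the measured shares; as a sketch using the rounded figures above:

```python
# Rounded shares from the CounselLink sample described above.
time_entry_share = 0.40  # share of time entries containing "Draft" or "Review"
billing_share = 0.47     # share of billings those entries represent

# Working assumption: Generative AI could eliminate half of that candidate work.
reduction_factor = 0.5

entry_reduction = time_entry_share * reduction_factor   # 20% of time entries
revenue_reduction = billing_share * reduction_factor    # 23.5% of revenue

print(f"{entry_reduction:.1%}, {revenue_reduction:.1%}")  # 20.0%, 23.5%
```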

In the previous post, Ryan got tired of hearing the Goldman Sachs “44% of Legal is going away” stat being quoted uncritically and decided to actually look into the underlying data used in their report. Ryan’s exploration of the data is an interesting story in and of itself, but the bottom line is that the data is fuzzy at best, the sample size is laughable, and the breathlessly unquestioning reporting on Goldman’s study has been remarkably sloppy.

After writing up his findings, Ryan shared that post with Greg and Toby, and the question quickly arose, “can we find some actual, useful data to better understand the effect that Generative AI might actually have on law firms?” Greg reached out to Kris Satkunas from LexisNexis CounselLink, a recent interviewee on the Geek in Review, to see if CounselLink could share some anonymized benchmark data for us to analyze.

LexisNexis CounselLink Data

As a reminder, the Goldman data came from survey questions about how important certain “work tasks” were to respondents’ jobs. Those tasks included things like “Getting Information”, “Identifying Objects, Actions, and Events”, and “Scheduling Work and Activities”. These are quite vague and wide open to interpretation.

In an attempt to find more useful data for our purposes, we asked Kris for the percentages of all time entries that included the keywords “Draft” or “Review” in the description. Our assumption is that those two terms will capture a large percentage of actual time entries in which lawyers are likely to use Generative AI. We fully recognize that this simple heuristic will not produce a clean data set from which to extrapolate definitive results, but as a first pass at some real data, we believe this gives us a nice estimate of tasks that could potentially be ripe for automation with Generative AI.
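As a rough sketch, the heuristic amounts to a case-insensitive substring match over time-entry descriptions, then tallying the share of entries and billings that match. The field names and sample entries below are hypothetical, illustrative stand-ins, not CounselLink's actual schema or data.

```python
# Hypothetical time entries; descriptions and amounts are illustrative only.
entries = [
    {"description": "Draft merger agreement", "amount": 4200.0},
    {"description": "Review due diligence materials", "amount": 1800.0},
    {"description": "Telephone conference with client", "amount": 600.0},
    {"description": "Prepare for deposition", "amount": 1400.0},
]

KEYWORDS = ("draft", "review")

def is_candidate(entry):
    """Flag entries whose description mentions 'Draft' or 'Review'."""
    desc = entry["description"].lower()
    return any(kw in desc for kw in KEYWORDS)

candidates = [e for e in entries if is_candidate(e)]

# Share of entries, and share of billings, potentially ripe for Generative AI
entry_share = len(candidates) / len(entries)
billing_share = sum(e["amount"] for e in candidates) / sum(e["amount"] for e in entries)

print(entry_share)              # 0.5
print(round(billing_share, 3))  # 0.75
```

On the real CounselLink data this same two-keyword filter produced the roughly 40%-of-entries / 47%-of-billings figures discussed below; the point of the sketch is just how coarse the heuristic is.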
Continue Reading Generative AI Could Reduce Law Firm Revenue by 23.5%

This is the first in a 3-part blog series; it first appeared on The Sente Playbook. The other two posts are co-authored by Toby Brown and Greg Lambert and will follow later this week. Apologies for the length of this post, but I was channeling my inner Casey Flaherty.
The Big Idea:  The data that Goldman used is insufficient to make the claims about Generative AI’s effect on legal that their report did.
Key Take-Aways:
  • Reporting about this report is sloppy
  • Reporting within this report is sloppy
  • The underlying data doesn’t tell us much meaningful
  • 3 Geeks attempts to find meaningful data
On March 26th, 2023, Goldman Sachs sent shockwaves through the legal industry by publishing a report claiming that 44% of “something” in the Legal Industry was going to be replaced by Generative AI.  I didn’t question that stat at the time, because it sounded about right to me.  I suspect that was true for most people who know the legal industry.  As I’ve heard this stat repeated by multiple AI purveyors actively scaring lawyers into buying their products or services, I eventually started to question its validity.
I started by looking into the press coverage of that 44% number and was immediately confused.  (All emphasis below added by me.)

Law.com  – March 29, 2023
Generative AI Could Automate Almost Half of All Legal Tasks, Goldman Sachs Estimates
“Goldman Sachs estimated that generative AI could automate 44% of legal tasks in the U.S. “

Observer – March 30, 2023
Two-Thirds of Jobs Are at Risk: Goldman Sachs A.I. Study
“The investment bank’s economists estimate that 46% of administrative positions, 44% of legal positions, and 37% of engineering jobs could be replaced by artificial intelligence.

NY Times – April 10, 2023
A.I. Is Coming for Lawyers, Again
“Another research report, by economists at Goldman Sachs, estimated that 44 percent of legal work could be automated.”

Okay, so which is it?  Generative AI is going to replace 44% of legal tasks, positions, or work?
Because those are 3 very different things; each of which would have extremely different impacts on the industry if they came to pass.  Lest you think I cherry-picked three outlying articles, go ahead and Google “AI Replace 44% Legal Goldman Sachs” and see what you get.  Those 3 articles are in my top 5 results.
My top result as of this writing is a news article from IBL News, writing last Tuesday that Goldman says,  “AI could automate 46% of tasks in administrative jobs, 44% of legal jobs, and 37% of architecture and engineering professions.”
We should probably just go back to what the Goldman Sachs report actually said and then we can chalk this up to lazy tech journalism.  Well, not so fast.  Because while the Goldman researchers clearly say “current work tasks” (see below) even that begins to fall apart once you dig into the underlying data.

What Goldman Sachs actually said in the report

Continue Reading 44% of Investment Bankers Think They Can Make Lots of Money Off of Attorney Insecurity (AI)

Tony Thai and Ashley Carlisle of HyperDraft, return to The Geek in Review podcast to provide an update on the state of generative AI in the legal industry. It has been 6 months since their last appearance, when the AI Hype Cycle was on the rise. We wanted to get them back on the show to see where we are on that hype cycle at the moment.

While hype around tools like ChatGPT has started to level off, Tony and Ashley note there is still a lot of misinformation and unrealistic expectations about what this technology can currently achieve. Over the past few months, HyperDraft has received an influx of requests from law firms and legal departments for education and consulting on how to practically apply AI like large language models. Many organizations feel pressure from management to “do something” with AI, but lack a clear understanding of the concrete problems they aim to solve. The result is a solution in search of a problem.

Tony and Ashley provide several key lessons learned regarding limitations of generative AI. It is not a magic bullet or panacea – you still have to put in the work to standardize processes before automating them. The technology excels at research, data extraction and summarization, but struggles to create final, high-quality legal work product. If the issue being addressed is about standardizing processes or topics, then having the ability to create 50 different ways to answer the issue doesn’t create standards; it creates chaos.

Current useful applications center on legal research, brainstorming, administrative tasks – not mission-critical legal analysis. The hype around generative AI could dampen innovation in process automation using robotic process automation and expert systems. Casetext’s acquisition by Thomson Reuters illustrates the present-day limitations of large language models trained primarily on case law.

Looking to the near future, Tony and Ashley predict the AI hype cycle will continue to fizzle out as focus shifts to education and literacy around all forms of AI. More legal tech products will likely combine specialized AI tools with large language models. And law firms may finally move towards flat rate billing models in order to meet client expectations around efficiency gains from AI.


Continue Reading You Still Need to Put in the Work: Hyperdraft’s Ashley Carlisle and Tony Thai on the AI Hype Cycle (TGIR Ep. 213)

The Geek in Review podcast welcomed Kriti Sharma, Chief Product Officer of Legal Tech at Thomson Reuters, to discuss AI and ethics in the legal industry. Kriti talks to us about the importance of diversity at Thomson Reuters and how it impacts product development. She explained TR’s approach to developing AI focused on augmenting human skills rather than full automation. Kriti also discusses the need for more regulation around AI and the shift towards human skills as AI takes on more technical work.

A major theme was the responsible development and adoption of AI tools like ChatGPT. She discusses the risks of bias but shares TR’s commitment to building trusted and ethical AI grounded in proven legal content. Through this “grounding” of the information, the AI produces reliable answers lawyers can confidently use, reducing the hallucinations that are prevalent in publicly available commercial generative AI tools.

Kriti shares her passion for ensuring people from diverse backgrounds help advance AI in law. She argues representation is critical in who develops the tech and what data trains it to reduce bias. Kriti explains that diversity of experiences and knowledge amongst AI creators is key to building inclusive products that serve everyone’s needs. She emphasizes Thomson Reuters’ diversity across leadership, which informs development of thoughtful AI. Kriti states that as AI learns from its creators and data like humans do, we must be intentional about diverse participation. Having broad involvement in shaping AI will lead to technology that is ethical and avoids propagating systemic biases. Kriti makes a compelling case that inclusive AI creation is imperative for both building trust and realizing the full potential of the technology to help underserved communities.

Kriti Sharma highlights the potential for AI to help solve major societal challenges through her non-profit AI for Good. For example, democratizing access to helpful legal and mental health information. She spoke about how big companies like TR can turn this potential into actual services benefiting underserved groups. Kriti advocated for collaboration between industry, government and civil society to develop beneficial applications of AI.

Kriti founded the non-profit AI for Good to harness the power of artificial intelligence to help solve pressing societal challenges. Through AI for Good, Kriti has led the development of AI applications focused on expanding access to justice, mental healthcare, and support services for vulnerable groups. For example, the organization created the chatbot tool rAInbow to provide information and resources to those experiencing domestic violence. By partnering frontline organizations with technologists, AI for Good aims to democratize access to helpful services and trusted information. Kriti sees huge potential for carefully constructed AI to have real positive impact in areas like legal services for underserved communities.

Looking ahead, Kriti says coordinated AI regulations are needed globally. She calls for policymakers, companies and society to work together to establish frameworks that enable adoption while addressing risks. With the right balance, AI can transform legal services for the better.


Continue Reading Thomson Reuters’ Kriti Sharma on Responsible AI: The Path to Trusted Tech in Law