by 3 Geeks (Ryan McClead, Greg Lambert, and Toby Brown)

This is part 2 in a 3-part series. The first part is here. Part 3 is here.

The Big Idea: We found a much better dataset, though still small, from which to extrapolate actual effects of Generative AI on the legal industry.

Key takeaways:

  • We got anonymized and summarized data for 10 corporate legal departments from LexisNexis CounselLink
  • The data showed that almost 40% of time entries, representing 47% of billings, could potentially use Generative AI.
  • We estimate that a realistic initial upper limit for Generative AI would be to reduce that work by half, or 20% of time entries and 23.5% of revenue (the arithmetic is sketched below).
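
A minimal sketch of that arithmetic (the percentages come from the takeaways above; the 50% reduction factor is our own assumption):

```python
# Shares of work whose time-entry narratives mention "Draft" or "Review"
# (from the CounselLink benchmark data discussed below).
ai_addressable_entries = 0.40   # ~40% of time entries
ai_addressable_billings = 0.47  # 47% of billings

# Our assumption: a realistic initial upper limit is that
# Generative AI reduces this work by half.
reduction_factor = 0.5

entries_reduced = ai_addressable_entries * reduction_factor   # 0.20  -> 20% of time entries
revenue_reduced = ai_addressable_billings * reduction_factor  # 0.235 -> 23.5% of revenue

print(f"Time entries reduced: {entries_reduced:.1%}")  # 20.0%
print(f"Revenue reduced: {revenue_reduced:.1%}")       # 23.5%
```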

In the previous post, Ryan got tired of hearing the Goldman Sachs “44% of Legal is going away” stat being quoted uncritically and decided to actually look into the underlying data used in their report. Ryan’s exploration of the data is an interesting story in and of itself, but the bottom line is that the data is fuzzy at best, the sample size is laughable, and the breathlessly unquestioning reporting on Goldman’s study has been remarkably sloppy.

After writing up his findings, Ryan shared that post with Greg and Toby, and the question quickly arose: “can we find some actual, useful data to better understand the effect that Generative AI might actually have on law firms?” Greg reached out to Kris Satkunas from LexisNexis CounselLink, a recent interviewee on The Geek in Review, to see if CounselLink could share some anonymized benchmark data for us to analyze.

LexisNexis CounselLink Data

As a reminder, the Goldman data was based on survey questions asking how important certain “work tasks” were to respondents’ jobs. Those tasks included things like “Getting Information”, “Identifying Objects, Actions, and Events”, and “Scheduling Work and Activities”. These are quite vague and wide open to interpretation.

In an attempt to find more useful data for our purposes, we asked Kris for the percentages of all time entries that included the keywords “Draft” or “Review” in the description. Our assumption is that those two terms will capture a large percentage of the actual time entries in which lawyers are likely to use Generative AI. We fully recognize that this simple heuristic will not produce a clean data set from which to extrapolate definitive results, but as a first pass at some real data, we believe it gives us a reasonable estimate of the tasks that could be ripe for automation with Generative AI (a sketch of the filter appears below).
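
To make the heuristic concrete, here is a minimal sketch of the keyword filter in Python. The schema is hypothetical (the column names are our invention); the actual CounselLink data came to us anonymized and pre-summarized:

```python
import pandas as pd

# Hypothetical schema: one row per time entry, with a free-text
# narrative and the amount billed for that entry.
entries = pd.read_csv("time_entries.csv")  # columns: description, billed_amount

# Flag entries whose narrative contains "draft" or "review"
# (case-insensitive; also catches "drafting", "reviewed", etc.).
mask = entries["description"].str.contains(
    r"\b(?:draft|review)", case=False, regex=True, na=False
)

pct_of_entries = mask.mean()
pct_of_billings = entries.loc[mask, "billed_amount"].sum() / entries["billed_amount"].sum()

print(f"Entries mentioning draft/review: {pct_of_entries:.1%}")
print(f"Share of billings: {pct_of_billings:.1%}")
```

Continue Reading Generative AI Could Reduce Law Firm Revenue by 23.5%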

This is the first post in a 3-part series; it first appeared on The Sente Playbook. The other 2 posts are co-authored by Toby Brown and Greg Lambert and will follow later this week. Apologies for the length of this post, but I was channeling my inner Casey Flaherty.
The Big Idea: The data that Goldman used is insufficient to support the claims their report makes about Generative AI’s effect on legal.
Key Take-Aways:
  • Reporting about this report is sloppy
  • Reporting within this report is sloppy
  • The underlying data doesn’t tell us much that is meaningful
  • 3 Geeks attempts to find meaningful data
On March 26th, 2023, Goldman Sachs sent shockwaves through the legal industry by publishing a report claiming that 44% of “something” in the Legal Industry was going to be replaced by Generative AI. I didn’t question that stat at the time, because it sounded about right to me. I suspect that was true for most people who know the legal industry. As I’ve heard this stat repeated by multiple AI purveyors actively scaring lawyers into buying their products or services, I eventually started to question its validity.
I started by looking into the press coverage of that 44% number and was immediately confused.  (All emphasis below added by me.)

Law.com  – March 29, 2023
Generative AI Could Automate Almost Half of All Legal Tasks, Goldman Sachs Estimates
“Goldman Sachs estimated that generative AI could automate 44% of legal tasks in the U.S.”

Observer – March 30, 2023
Two-Thirds of Jobs Are at Risk: Goldman Sachs A.I. Study
“The investment bank’s economists estimate that 46% of administrative positions, 44% of legal positions, and 37% of engineering jobs could be replaced by artificial intelligence.”

NY Times – April 10, 2023
A.I. Is Coming for Lawyers, Again
“Another research report, by economists at Goldman Sachs, estimated that 44 percent of legal work could be automated.”

Okay, so which is it?  Generative AI is going to replace 44% of legal tasks, positions, or work?
Because those are 3 very different things, each of which would have extremely different impacts on the industry if they came to pass. Lest you think I cherry-picked three outlying articles, go ahead and Google “AI Replace 44% Legal Goldman Sachs” and see what you get. Those 3 articles are in my top 5 results.
My top result as of this writing is a news article from IBL News, which wrote just last Tuesday that Goldman says, “AI could automate 46% of tasks in administrative jobs, 44% of legal jobs, and 37% of architecture and engineering professions.”
We should probably just go back to what the Goldman Sachs report actually said, and then we can chalk this up to lazy tech journalism. Well, not so fast. Because while the Goldman researchers clearly say “current work tasks” (see below), even that begins to fall apart once you dig into the underlying data.

What Goldman Sachs actually said in the report

Continue Reading 44% of Investment Bankers Think They Can Make Lots of Money Off of Attorney Insecurity (AI)

Tony Thai and Ashley Carlisle of HyperDraft return to The Geek in Review podcast to provide an update on the state of generative AI in the legal industry. It has been 6 months since their last appearance, when the AI Hype Cycle was on the rise. We wanted to get them back on the show to see where we are on that hype cycle at the moment.

While hype around tools like ChatGPT has started to level off, Tony and Ashley note there is still a lot of misinformation and unrealistic expectations about what this technology can currently achieve. Over the past few months, HyperDraft has received an influx of requests from law firms and legal departments for education and consulting on how to practically apply AI like large language models. Many organizations feel pressure from management to “do something” with AI, but lack a clear understanding of the concrete problems they aim to solve. The result is a “solution in search of a problem” situation.

Tony and Ashley share several lessons learned about the limitations of generative AI. It is not a magic bullet or panacea – you still have to put in the work to standardize processes before automating them. The technology excels at research, data extraction, and summarization, but struggles to create final, high-quality legal work product. And if the goal is to standardize a process or topic, the ability to generate 50 different answers to the same issue doesn’t create standards; it creates chaos.

Current useful applications center on legal research, brainstorming, administrative tasks – not mission-critical legal analysis. The hype around generative AI could dampen innovation in process automation using robotic process automation and expert systems. Casetext’s acquisition by Thomson Reuters illustrates the present-day limitations of large language models trained primarily on case law.

Looking to the near future, Tony and Ashley predict the AI hype cycle will continue to fizzle out as focus shifts to education and literacy around all forms of AI. More legal tech products will likely combine specialized AI tools with large language models. And law firms may finally move towards flat rate billing models in order to meet client expectations around efficiency gains from AI.

Listen on mobile platforms:  Apple Podcasts |  Spotify

Contact Us:

Twitter: @gebauerm, or @glambert
Voicemail: 713-487-7821
Email: geekinreviewpodcast@gmail.com
Music: Jerry David DeCicca

Transcript

Continue Reading You Still Need to Put in the Work: Hyperdraft’s Ashley Carlisle and Tony Thai on the AI Hype Cycle (TGIR Ep. 213)

The Geek in Review podcast welcomes Kriti Sharma, Chief Product Officer of Legal Tech at Thomson Reuters, to discuss AI and ethics in the legal industry. Kriti talks to us about the importance of diversity at Thomson Reuters and how it impacts product development. She explains TR’s approach to developing AI focused on augmenting human skills rather than full automation. Kriti also discusses the need for more regulation around AI and the shift towards human skills as AI takes on more technical work.

A major theme is the responsible development and adoption of AI tools like ChatGPT. Kriti discusses the risks of bias but shares TR’s commitment to building trusted and ethical AI grounded in proven legal content. Through this “grounding” of the information, the AI produces reliable answers lawyers can confidently use, and reduces the hallucinations that are prevalent in publicly available commercial Gen AI tools.

Kriti shares her passion for ensuring people from diverse backgrounds help advance AI in law. She argues that representation – in who develops the tech and what data trains it – is critical to reducing bias. Kriti explains that diversity of experiences and knowledge amongst AI creators is key to building inclusive products that serve everyone’s needs. She emphasizes Thomson Reuters’ diversity across leadership, which informs development of thoughtful AI. Kriti states that as AI learns from its creators and data like humans do, we must be intentional about diverse participation. Having broad involvement in shaping AI will lead to technology that is ethical and avoids propagating systemic biases. Kriti makes a compelling case that inclusive AI creation is imperative for both building trust and realizing the full potential of the technology to help underserved communities.

Kriti Sharma highlights the potential for AI to help solve major societal challenges through her non-profit AI for Good – for example, by democratizing access to helpful legal and mental health information. She speaks about how big companies like TR can turn this potential into actual services benefiting underserved groups. Kriti advocates for collaboration between industry, government, and civil society to develop beneficial applications of AI.

Kriti founded the non-profit AI for Good to harness the power of artificial intelligence to help solve pressing societal challenges. Through AI for Good, Kriti has led the development of AI applications focused on expanding access to justice, mental healthcare, and support services for vulnerable groups. For example, the organization created the chatbot tool rAInbow to provide information and resources to those experiencing domestic violence. By partnering frontline organizations with technologists, AI for Good aims to democratize access to helpful services and trusted information. Kriti sees huge potential for carefully constructed AI to have real positive impact in areas like legal services for underserved communities.

Looking ahead, Kriti says coordinated AI regulations are needed globally. She calls for policymakers, companies and society to work together to establish frameworks that enable adoption while addressing risks. With the right balance, AI can transform legal services for the better.

Listen on mobile platforms:  Apple Podcasts |  Spotify

Contact Us:

Twitter: @gebauerm, or @glambert
Voicemail: 713-487-7821
Email: geekinreviewpodcast@gmail.com
Music: Jerry David DeCicca

Transcript

Continue Reading Thomson Reuters’ Kriti Sharma on Responsible AI: The Path to Trusted Tech in Law

In this episode of The Geek in Review podcast, host Marlene Gebauer and co-host Greg Lambert discuss cybersecurity challenges with guests Jordan Ellington, founder of SessionGuardian, Oren Leib, Vice President of Growth and Partnership at SessionGuardian, and Trisha Sircar, partner and chief privacy officer at Katten Muchin Rosenman LLP.

Ellington explains that the impetus for creating SessionGuardian came from working with a law firm to secure their work with eDiscovery vendors and contract attorney staffing agencies. The goal was to standardize security practices across vendors. Ellington realized the technology could provide secure access to sensitive information from anywhere. SessionGuardian uses facial recognition to verify a user’s identity remotely.

Leib discusses some alarming cybersecurity statistics, including a 7% weekly increase in global cyber attacks and the fact that law firms and insurance companies face over 1,200 attacks per week on average. Leib notes SessionGuardian’s solution addresses risks beyond eDiscovery and source code review, including data breach response, M&A due diligence, and outsourced call centers. Recently, a major North American bank told Leib that 10 of their recent breach incidents were caused by unauthorized photography of sensitive data.

Sircar says law firms’ top challenges are employee issues, data retention problems, physical security risks, and insider threats. Regulations address real-world issues but can be difficult for global firms to navigate. Certifications show a firm’s commitment to security, but continuous monitoring and updating of practices are key. When negotiating with vendors, Sircar recommends considering cyber liability insurance, audit rights, data breach responsibility, and limitations of liability.

Looking ahead, Sircar sees employee education as an ongoing priority, along with the ethical use of AI. Ellington expects AI will be used for increasingly sophisticated phishing and impersonation attacks, requiring better verification of individuals’ identities. Leib says attorneys must take responsibility for cyber defenses, not just rely on engineers. He announces SessionGuardian will offer free CLE courses on cybersecurity awareness and compliance.

The episode highlights how employee errors and AI threats are intensifying even as remote and hybrid work become standard. Firms should look beyond check-the-box compliance to make privacy and security central in their culture. Technology like facial recognition and continuous monitoring helps address risks, but people in all roles must develop competence and vigilance. Overall, keeping client data secure requires an integrated and ever-evolving approach across departments and service providers. Strong terms in vendor agreements and verifying partners’ practices are also key.

Listen on mobile platforms:  Apple Podcasts |  Spotify

Contact Us:

Twitter: @gebauerm, or @glambert
Voicemail: 713-487-7821
Email: geekinreviewpodcast@gmail.com
Music: Jerry David DeCicca

Transcript

Continue Reading Cybersecurity in the Remote Work Era: AI, Employees and an Integrated Defense – With SessionGuardian’s Jordan Ellington and Oren Leib, and Katten’s Trisha Sircar (TGIR Ep. 211)

For the Fourth of July week, we thought we’d do something fun and probably a little weird. Greg spoke with an AI guest named Justis for this episode. Justis, powered by OpenAI’s GPT-4, was able to have a natural conversation with Greg and provide insightful perspectives on the use of generative AI in the legal industry, specifically in law firms.

In the first part of their discussion, Justis gave an overview of the legal industry’s interest in and uncertainty around adopting generative AI. While many law firm leaders recognize its potential, some are unsure of how it fits into legal work or worry about risks. Justis pointed to examples of firms exploring AI and said letting lawyers experiment with the tools could help identify use cases.

Greg and Justis then discussed the challenges for the legal industry in using AI, like knowledge gaps, data issues, technology maturity, and managing change. They also talked about the upsides of using AI for tasks such as research, drafting, and review, including efficiency and cost benefits, as well as downsides like over-reliance on AI and ethical concerns.

The conversation turned to how AI could streamline law firm operations, with opportunities around scheduling, paperwork, billing, client insights, and more. However, Justis noted that human oversight is still critical. Justis and Greg also discussed how AI may impact legal jobs, creating demand for new skills and roles but aiming to augment human work rather than replace it.

Finally, Justis suggested innovations law firms could build with AI like research and drafting tools, analytics, dispute resolution systems, and project management. Justis emphasized that focusing on user needs, ethics, and change management will be key for successfully implementing AI. Looking ahead, Justis anticipated continuing progress in legal AI, regulatory changes, a focus on ethics, growing demand for AI skills, and AI becoming a competitive advantage for some firms.

While this was a “unique” episode for The Geek in Review, we hope it provided an insightful “conversation” about the current and future state of generative AI in the legal industry. There is significant promise but there are also challenges around managing change, addressing risks, and ensuring the responsible development of new AI tools. With the right focus and approach, law firms can start exploring ways to make the most of AI and gain a competitive edge. But they must make AI work for human professionals, not the other way around.

Listen on mobile platforms:  Apple Podcasts |  Spotify

Contact Us:

Twitter: @gebauerm, or @glambert
Voicemail: 713-487-7821
Email: geekinreviewpodcast@gmail.com
Music: Jerry David DeCicca

Transcript


Continue Reading A Literal Generative AI Discussion: How AI Could Reshape Law

Isha Marathe, a tech reporter for American Lawyer Media, joined the podcast to discuss her recent article on how deep fake technology is coming to litigation and whether the legal system is prepared. Deep fakes are hyper-realistic images, videos or audio created using artificial intelligence to manipulate or generate fake content. They are easy and inexpensive to create but difficult to detect. Marathe believes deep fakes have the potential to severely impact the integrity of evidence and the trial process if the legal system is unprepared.

E-discovery professionals are on the front lines of detecting deep fakes used as evidence, according to Marathe. However, they currently only have limited tools and methods to authenticate digital evidence and determine if it is real or AI-generated. Marathe argues judges and lawyers also need to be heavily educated on the latest developments in deep fake technology in order to counter their use in court. Regulations, laws and advanced detection technology are still lacking but urgently needed.

Marathe predicts that in the next two to five years, deep fakes will start to significantly affect litigation and pose risks to the judicial process if key players are unprepared. States will likely pass a patchwork of laws to regulate AI-generated images. Sophisticated detection software will emerge but will not be equally available in all courts, raising issues of equity and access to justice.

The two recent cases in which parties claimed evidence was a deep fake highlight the issues at stake, though the claims did not dramatically alter the trial outcomes. However, as deep fake technology continues to rapidly advance, it may soon be weaponized to generate highly compelling and persuasive fake evidence that could dupe both legal professionals and jurors. Once seen, such imagery can be hard to ignore, even if proven to be false or AI-generated.

Marathe argues that addressing and adapting to the rise of deep fakes will require a multi-pronged solution: education, technology tools, regulations, and policy changes. But progress on all fronts is slow while threats escalate quickly. Deep fakes are an alarming prospect for legal professionals and the public alike, threatening to drag the legal system as a whole into an era of “post-truth.” Trust in the integrity of evidence and trial outcomes could be at stake. Overall, it was an informative if sobering discussion on the state of the legal system’s preparedness for inevitable collisions with deep fake technology.

Listen on mobile platforms:  Apple Podcasts |  Spotify

Contact Us:

Twitter: @gebauerm, or @glambert
Voicemail: 713-487-7821
Email: geekinreviewpodcast@gmail.com
Music: Jerry David DeCicca

Transcript

Continue Reading The Rise of “Post-Truth” Litigation: ALM’s Isha Marathe on How Deep Fakes Threaten the Legal System (TGIR Ep. 209)

Our guest this week is Kristina Satkunas, Director of Analytic Consulting at LexisNexis. Kristina discusses the recently released LexisNexis CounselLink Enterprise Legal Management Trends Report for 2023. This annual report provides insights and benchmarks on key metrics related to corporate legal spending and outside counsel relationships.

The 2023 report found that law firm hourly rates increased 4.5% over the past year, the highest year-over-year increase in the 10 years LexisNexis has published the report. While rate increases are not surprising, the magnitude is noteworthy. Kristina attributes the largest drivers of the increase to economic factors like inflation as well as lower demand for certain types of legal work. However, average blended rates (the rates charged for entire matters rather than individual timekeepers) remained relatively flat. This suggests in-house counsel are mitigating rate hikes by changing the mix of firms, timekeepers, and types of timekeepers working their matters.
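
The blended-rate math helps explain this. A matter’s blended rate is total fees divided by total hours, so shifting hours toward lower-rate timekeepers can hold the blended rate flat (or even lower it) while every individual rate rises. Here is a minimal sketch with made-up numbers:

```python
def blended_rate(timekeepers):
    """Blended rate = total fees / total hours across a matter's timekeepers."""
    total_fees = sum(rate * hours for rate, hours in timekeepers)
    total_hours = sum(hours for _, hours in timekeepers)
    return total_fees / total_hours

# Hypothetical matter staffing, as (hourly rate, hours) pairs.
last_year = [(900, 40), (400, 60)]  # partner-heavy mix
this_year = [(940, 25), (420, 75)]  # rates up ~4.5%, more hours shifted to the associate

print(f"Last year: ${blended_rate(last_year):,.0f}/hr")  # $600/hr
print(f"This year: ${blended_rate(this_year):,.0f}/hr")  # $550/hr
```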

The report also found the ongoing trend of consolidation to fewer outside firms continues, with 61% of companies using 10 or fewer firms for 80% of their legal spending. Kristina expects this trend to remain relatively stable but notes there are benefits to using both a smaller number of firms (e.g. better rates, stronger relationships) and a larger number of firms (e.g. subject matter expertise, competitive rates). She recommends companies determine when to use large firms versus smaller or midsize firms based on factors like matter complexity, risk profile, and cost.

Alternative fee arrangements (AFAs) have not gained significant traction according to the report, remaining at about 12% of matters. Kristina is an advocate for wider AFA adoption and believes companies need to ask for and consider AFA proposals, especially for appropriate matters. AFAs can help buffer rising hourly rates. She acknowledges AFAs require effort to evaluate and implement but thinks legal operations teams and outside counsel should work together using data and analytics to develop reasonable AFA proposals.

The report provides new data on international lawyer rates in 22 countries. Rates differ significantly between countries based on factors like a country’s economy, political stability, and role in global trade and commerce. Many companies are leveraging international firms for regulatory, litigation, IP, and other legal needs outside the U.S. Benchmark data on rates in different countries provides helpful context, especially when engaging firms in new countries.

Kristina sees two significant changes on the horizon:

  1. Determining how to properly and effectively employ AI and technology to increase efficiency and reduce costs; and
  2. Continued access to data enabling both in-house and outside counsel to make smarter, data-driven decisions.

When asked what metric in-house and outside counsel should focus on, Kristina recommends using available data, whether from the survey or a company’s own systems. Data is a “two-way street” that should be shared collaboratively to improve decision making.

Listen on mobile platforms:  Apple Podcasts |  Spotify

Contact Us:

Twitter: @gebauerm, or @glambert
Voicemail: 713-487-7821
Email: geekinreviewpodcast@gmail.com
Music: Jerry David DeCicca

Transcript

Continue Reading The Rising Cost of Legal Services: Insights from 10 Years of Data from CounselLink’s Kristina Satkunas

This week on The Geek in Review, Marlene Gebauer and Greg Lambert talk with Curt Meltzer, principal of Meltzer Consulting, LLC. Meltzer has over 40 years of experience in the legal and legal tech industry. He discusses his interest in pro bono and community outreach programs in law firms and legal tech companies. He notes that while 95% of AmLaw 200 law firms highlight pro bono work on their websites, many legal tech companies do not prioritize these efforts.

Meltzer emphasizes that pro bono and community work is good for business. It enhances company culture, helps with recruiting and retaining top talent, and strengthens customer relationships. He argues that legal tech companies should consider emulating their law firm clients’ community programs. This could include donating software or services, allowing employees paid time off for volunteer work, or collaborating directly with organizations that law firm clients support.

Meltzer highlights LexisNexis and Thomson Reuters as leaders in the legal tech industry for their work promoting access to justice and the rule of law around the world. However, he notes that companies of any size can contribute, whether by recognizing employees who volunteer or donating resources. To raise awareness, he published a list of 41 legal tech companies that do highlight community outreach on their websites, though he also found 39 companies with no mention of such efforts.

Meltzer sees both opportunities and challenges ahead. Private equity investment in legal tech companies may prioritize short-term profits over community programs. However, companies that do not respond to customer interest in their pro bono and corporate social responsibility initiatives risk losing business to competitors. Overall, Meltzer aims to foster conversations about strengthening the relationship between the legal tech community and the broader community. Corporations that embrace ESG programs and give back to the communities they serve will thrive.

Listen on mobile platforms:  Apple Podcasts |  Spotify

Contact Us:

Twitter: @gebauerm, or @glambert
Voicemail: 713-487-7821
Email: geekinreviewpodcast@gmail.com
Music: Jerry David DeCicca

Transcript

Continue Reading Curt Meltzer on Why Legal Tech Companies Should Give Back: The Business Case for Pro Bono, A2J, and Community Outreach (TGIR Ep. 207)

This week we bring in Christian Lang, the CEO and founder of LEGA, a company that provides a secure platform for law firms and legal departments to safely implement and govern the use of large language models (LLMs) like OpenAI’s GPT-4, Google’s Bard, and Anthropic’s Claude. Christian talks with us about why he started LEGA, the value LEGA provides to law firms and legal departments, the challenges around security, confidentiality, and other issues as LLMs become more widely used, and how LEGA helps solve those problems.

Christian started LEGA after gaining experience working with law firms through his previous company, Reynen Court. He saw an opportunity to give law firms a way to quickly implement and test LLMs while maintaining control and governance over data and compliance. LEGA provides a sandbox environment for law firms to explore different LLMs and AI tools to find use cases. The platform handles user management, policy enforcement, and auditing to give firms visibility into how the technologies are being used.

Christian believes law firms want to use technologies like LLMs but struggle with how to do so securely and in a compliant way. LEGA allows them to get started right away without a huge investment in time or money. The platform is also flexible enough to work with any model a firm wants to use. As law firms get comfortable, LEGA will allow them to scale successful use cases across the organization.

On the challenges law firms face, Christian points to shadow IT: people will find ways to use these technologies with or without the firm’s permission, so firms need to provide good options to users or risk losing control and oversight. He also discusses the difficulty of training new lawyers as LLMs make some tasks too easy, the coming market efficiencies in legal services, and the strategic curation of knowledge that will still require human judgment.

Some potential use cases for law firms include live chatbots, document summarization, contract review, legal research, and market intelligence gathering. As models allow for more tailored data inputs, the use cases will expand further. Overall, Christian is excited for how LLMs and AI can transform the legal industry but emphasizes that strong governance and oversight are key to implementing them successfully.

Listen on mobile platforms:  Apple Podcasts |  Spotify


Contact Us:

Twitter: @gebauerm, or @glambert
Voicemail: 713-487-7821
Email: geekinreviewpodcast@gmail.com
Music: Jerry David DeCicca

Transcript

Continue Reading Christian Lang on Governing the Rise of LLMs: How LEGA Provides a Safe Space for Law Firms to Use AI (TGIR Ep. 206)