This week we are joined by Brandon Wiebe, General Counsel and Head of Privacy at Transcend. Brandon discusses the company’s mission to develop privacy and AI solutions. He outlines the evolution from manual governance to technical solutions integrated into data systems. Transcend saw a need for more technical privacy and AI governance as data processing advanced across organizations.

Wiebe provides examples of AI governance challenges, such as engineering teams using GitHub Copilot and sales/marketing teams using tools like Jasper. He created a lightweight AI Code of Conduct at Transcend to give guidance on responsible AI adoption. He believes technical enforcement like cataloging AI systems will also be key.

On ESG governance changes, Wiebe sees parallels to privacy regulation, which evolved from voluntary principles to specific technical requirements. He expects AI governance to follow a similar path, but much faster, requiring legal teams to become technical experts. Engaging early in development, with a lightweight approach, is key.

Transcend’s new Pathfinder tool provides observability into AI systems to enable governance. It acts as an intermediary layer between internal tools and foundation model providers like OpenAI, aiming to provide oversight and auditability for these AI systems.
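A minimal sketch of what such an intermediary layer might look like, purely for illustration: the class and method names below are invented for this example and are not Transcend's actual API. The idea is that every call to a foundation model passes through a proxy that records who asked what, and what came back.

```python
import time
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class AuditingProxy:
    """Hypothetical observability layer between internal tools and a model API."""
    call_model: Callable[[str], str]            # the upstream model call
    audit_log: list = field(default_factory=list)

    def complete(self, user: str, prompt: str) -> str:
        # Forward the request, then record the full exchange for auditors.
        response = self.call_model(prompt)
        self.audit_log.append({
            "ts": time.time(),
            "user": user,
            "prompt": prompt,
            "response": response,
        })
        return response

# Usage with a stubbed model in place of a real API call:
proxy = AuditingProxy(call_model=lambda p: f"echo: {p}")
print(proxy.complete("alice", "Summarize this contract."))  # prints: echo: Summarize this contract.
```

Because the proxy sees every request, governance teams get a single audit point regardless of which internal tool or model is involved.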

Looking ahead, Wiebe believes GCs must develop deep expertise in AI technology, either themselves or by building internal teams. Understanding the technology will allow counsel to provide practical, specific advice as adoption accelerates. Technical literacy will be critical.

Listen on mobile platforms: Apple Podcasts | Spotify | YouTube (NEW!)

Contact Us:

Twitter: @gebauerm or @glambert
Threads: @glambertpod or @gebauerm66
Voicemail: 713-487-7821
Email: geekinreviewpodcast@gmail.com
Music: Jerry David DeCicca

Transcript

Continue Reading Observing the Black Box: Transcend’s Brandon Wiebe’s Insights into Governing Emerging AI Systems (TGIR Ep. 218)

In this episode of The Geek in Review, hosts Greg Lambert and Marlene Gebauer interview three guests from UK law firm Travers Smith about their work on AI: Chief Technology Officer Oliver Bethell, Director of Legal Technology Shawn Curran, and AI Manager Sam Lansley. They discuss Travers Smith’s approach to testing and applying AI tools like generative models.

A key focus is finding ways to safely leverage AI while mitigating risks like copyright issues and hallucination. Travers Smith built an internal chatbot called YCNbot to experiment with generative AI through secure enterprise APIs. They are being cautious on the generative side but see more revolutionary impact from reasoning applications like analyzing documents.

Travers Smith has open sourced tools like YCNbot to spur responsible AI adoption. Collaboration with 273 Ventures helped build in multi-model support. The team is working on reducing dependence on manual prompting and increasing document analysis capabilities. They aim to be model-agnostic to hedge against reliance on a single vendor.

On model safety, Travers Smith emphasizes training data legitimacy, multi-model flexibility, and probing hallucination risks. They co-authored a paper on subtle errors in legal AI. Dedicated roles like prompt engineers are emerging to interface between law and technology. Travers Smith is exploring AI for tasks like contract review but not yet for work product.

When asked about the crystal ball for legal AI, the guests predicted the need for equitable distribution of benefits, growth in reasoning applications vs. generative ones, and movement toward more autonomous agents over manual prompting. Info providers may gain power over intermediaries applying their data.

This wide-ranging discussion provides an inside look at how one forward-thinking firm is advancing legal AI in a prudent and ethical manner. With an open source mindset, Travers Smith is exploring boundaries and sharing solutions to propel the responsible use of emerging technologies in law.


Transcript

Continue Reading Deploying Cutting-Edge Legal AI: Travers Smith’s Cautious, But Open-source Approach. (TGIR Ep. 216)

In this episode of The Geek in Review podcast, host Marlene Gebauer and co-host Greg Lambert discuss cybersecurity challenges with guests Jordan Ellington, founder of SessionGuardian, Oren Leib, Vice President of Growth and Partnership at SessionGuardian, and Trisha Sircar, partner and chief privacy officer at Katten Muchin Rosenman LLP.

Ellington explains that the impetus for creating SessionGuardian came from working with a law firm to secure their work with eDiscovery vendors and contract attorney staffing agencies. The goal was to standardize security practices across vendors. Ellington realized the technology could provide secure access to sensitive information from anywhere. SessionGuardian uses facial recognition to verify a user’s identity remotely.

Leib discusses some alarming cybersecurity statistics, including a 7% weekly increase in global cyber attacks and the fact that law firms and insurance companies face over 1,200 attacks per week on average. Leib notes SessionGuardian’s solution addresses risks beyond eDiscovery and source code review, including data breach response, M&A due diligence, and outsourced call centers. Recently, a major North American bank told Leib that 10 of their last breach incidents were caused by unauthorized photography of sensitive data.

Sircar says law firms’ top challenges are employee issues, data retention problems, physical security risks, and insider threats. Regulations address real-world issues but can be difficult for global firms to navigate. Certifications show a firm’s commitment to security, but continuous monitoring and updating of practices are key. When negotiating with vendors, Sircar recommends considering cyber liability insurance, audit rights, data breach responsibility, and limitations of liability.

Looking ahead, Sircar sees employee education as an ongoing priority, along with the ethical use of AI. Ellington expects AI will be used for increasingly sophisticated phishing and impersonation attacks, requiring better verification of individuals’ identities. Leib says attorneys must take responsibility for cyber defenses, not just rely on engineers. He announces SessionGuardian will offer free CLE courses on cybersecurity awareness and compliance.

The episode highlights how employee errors and AI threats are intensifying even as remote and hybrid work become standard. Firms should look beyond check-the-box compliance to make privacy and security central in their culture. Technology like facial recognition and continuous monitoring helps address risks, but people of all roles must develop competence and vigilance. Overall, keeping client data secure requires an integrated and ever-evolving approach across departments and service providers. Strong terms in vendor agreements and verifying partners’ practices are also key.


Transcript


Continue Reading Cybersecurity in the Remote Work Era: AI, Employees and an Integrated Defense – With SessionGuardian’s Jordan Ellington and Oren Leib, and Katten’s Trisha Sircar (TGIR Ep. 211)

This week we bring in Christian Lang, the CEO and founder of LEGA, a company that provides a secure platform for law firms and legal departments to safely implement and govern the use of large language models (LLMs) like OpenAI’s GPT-4, Google’s Bard, and Anthropic’s Claude. Christian talks with us about why he started LEGA, the value LEGA provides to law firms and legal departments, the challenges around security, confidentiality, and other issues as LLMs become more widely used, and how LEGA helps solve those problems.

Christian started LEGA after gaining experience working with law firms through his previous company, Reynen Court. He saw an opportunity to give law firms a way to quickly implement and test LLMs while maintaining control and governance over data and compliance. LEGA provides a sandbox environment for law firms to explore different LLMs and AI tools to find use cases. The platform handles user management, policy enforcement, and auditing to give firms visibility into how the technologies are being used.

Christian believes law firms want to use technologies like LLMs but struggle with how to do so securely and in a compliant way. LEGA allows them to get started right away without a huge investment in time or money. The platform is also flexible enough to work with any model a firm wants to use. As law firms get comfortable, LEGA will allow them to scale successful use cases across the organization.
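The kind of policy enforcement and auditing described above can be sketched in a few lines. This is a hypothetical illustration only: the class, field names, and blocking logic are invented for this example and are not LEGA's real interface. The point is that a gatekeeping layer can check prompts against firm policy before anything reaches a model, and log every decision.

```python
from dataclasses import dataclass, field

@dataclass
class PolicyGateway:
    """Hypothetical sandbox layer: enforce firm policy, record every decision."""
    blocked_terms: set
    audit: list = field(default_factory=list)

    def submit(self, user, prompt, model):
        # Flag any policy-restricted terms (e.g., confidential matter names).
        hits = [t for t in self.blocked_terms if t.lower() in prompt.lower()]
        allowed = not hits
        self.audit.append({"user": user, "allowed": allowed, "hits": hits})
        if not allowed:
            return "[blocked by firm policy]"
        return model(prompt)

# Usage with a stubbed model and an invented matter name:
gw = PolicyGateway(blocked_terms={"Project Falcon"})
print(gw.submit("associate1", "Summarize Project Falcon terms", lambda p: "ok"))
# prints: [blocked by firm policy]
```

A real platform would do far more (user management, per-model routing, redaction), but the audit trail produced by even this toy gateway shows why firms get visibility they lack when users hit model APIs directly.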

On the challenges law firms face, Christian points to Shadow IT as people will find ways to use the technologies with or without the firm’s permission. Firms need to provide good options to users or risk losing control and oversight. He also discusses the difficulty in training new lawyers as LLMs make some tasks too easy, the coming market efficiencies in legal services, and the strategic curation of knowledge that will still require human judgment.

Some potential use cases for law firms include live chatbots, document summarization, contract review, legal research, and market intelligence gathering. As models allow for more tailored data inputs, the use cases will expand further. Overall, Christian is excited for how LLMs and AI can transform the legal industry but emphasizes that strong governance and oversight are key to implementing them successfully.


Transcript

Continue Reading Christian Lang on Governing the Rise of LLMs: How LEGA Provides a Safe Space for Law Firms to Use AI (TGIR Ep. 206)

With the partial government shutdown approaching one month, Marlene and Greg attempt to make some sense of what this means for those of us who rely upon the information produced by the US Government. On this episode, we have an extended talk with Emily Feltren, Director of Government Relations at the American Association of Law Libraries (AALL), to uncover what’s working and what’s shut down. While the federal courts are still functioning, they are running on borrowed time and are scheduled to run out of funds on January 25th. The Pew Research Center has listed a number of data sources that are not being updated during the shutdown. The OMB also has a list of agencies shuttered at this time, and we assume that their libraries are also closed. If you’re hoping to submit a Freedom of Information Act (FOIA) request… good luck. Agencies may accept them, but they may not have anyone to process them. Basically, it’s a cluster-fudge right now in D.C.

Joel Lytle, Director of Information Security at Jackson Walker, talks with Greg about .gov sites that are unable to renew their security certificates during the shutdown. It may not be all that bad… for now. However, there are already reports that the shutdown of sites like donotcall.gov and identitytheft.gov is having some effects on consumers.

Joel’s advice… trust but verify. If you have questions about the website, call your technology security team and have them take a look at it. This is their area of expertise, so reach out to them.

Listen on: Apple Podcasts | Overcast | Spotify

Information Inspirations:

The law library world lost a legend this month with the passing of Eileen Searls. In addition to being an influencer in the law library world, she was also the aunt of Eve Searls, who, along with Jerry David DeCicca, performs the music you hear on The Geek In Review.
Continue Reading Episode 24: What Does the Federal Government Shutdown Mean for Legal Information?

Why Did Etisalat Block Flickr?
Image [cc] Za3tOoOr!

Nothing really irritates a researcher more than attempting to get to a website only to find that it has been blocked by your network software. In fact, many of you may find that social media sites are closed off at work because someone decided that you’ll spend your time uploading cat videos