This week we are joined by Brandon Wiebe, General Counsel and Head of Privacy at Transcend. Brandon discusses the company’s mission to develop privacy and AI solutions. He outlines the evolution from manual governance to technical solutions integrated into data systems. Transcend saw a need for more technical privacy and AI governance as data processing advanced across organizations.
Wiebe provides examples of AI governance challenges, such as engineering teams using GitHub Copilot and sales/marketing teams using tools like Jasper. He created a lightweight AI Code of Conduct at Transcend to give guidance on responsible AI adoption. He believes technical enforcement like cataloging AI systems will also be key.
On ESG governance changes, Wiebe sees parallels to privacy regulation evolving from voluntary principles to specific technical requirements. He expects AI governance will follow a similar path but much faster, requiring legal teams to become technical experts. Engaging early, and in a lightweight way, during development is key.
Transcend’s new Pathfinder tool provides observability into AI systems to enable governance. It acts as an intermediary layer between internal tools and foundation models like OpenAI. Pathfinder aims to provide oversight and auditability into these AI systems.
Looking ahead, Wiebe believes GCs must develop deep expertise in AI technology, either themselves or by building internal teams. Understanding the technology will allow counsels to provide practical and discrete advice as adoption accelerates. Technical literacy will be critical.
Twitter: @gebauerm, or @glambert
Threads: @glambertpod or @gebauerm66
Music: Jerry David DeCicca
Marlene Gebauer 0:07
Welcome to The Geek in Review. The podcast focused on innovative and creative ideas in the legal profession. I’m Marlene Gebauer.
Greg Lambert 0:14
And I’m Greg Lambert. Well, Marlene, you’ve been after me for months, years, I’m not sure how long,
Marlene Gebauer 0:22
at least a year
Greg Lambert 0:23
To do some type of video interaction and have a YouTube channel, whatnot, so that we could join the cool kids that do podcasting, but then also have a YouTube channel. So I think we finally figured out that we’re going to try something.
Marlene Gebauer 0:42
We’ve stumbled onto a solution? Yes,
Greg Lambert 0:44
we have, we have. So I think it’s gonna be a combination of we’re going to make some of our archive available online, and then going forward, adding in a video component to the podcast as well. So just wanted to give the audience a heads up. And we’ll be announcing things as we know more, which right now we know very little.
Marlene Gebauer 1:08
So this, this is very exciting, because our listeners may turn into viewers, and so they’ll get to see and hear all of the great content that our guests provide.
Greg Lambert 1:23
Yeah. And they’ll figure out why it is I have a face for radio.
Marlene Gebauer 1:29
I just opened that right up for you.
Greg Lambert 1:32
Thank you. You toss me the softballs, I knock them out of the park. So this week, we are joined by Brandon Wiebe, who’s the General Counsel and Head of Privacy at Transcend. Brandon, welcome to The Geek in Review.
Brandon Wiebe 1:51
Thank you, Greg. Thank you Marlene, really excited to be here.
Marlene Gebauer 1:55
So Brandon, would you mind giving us a little history of transcend and what you do to support the mission there?
Brandon Wiebe 2:01
Yeah, absolutely. So Transcend is a data privacy infrastructure company, by background. And the full mission of Transcend is to develop technology solutions to some of the really complex privacy challenges that companies have faced over the last 10 years. By way of example, historically, a lot of privacy organizations were tackling privacy challenges through more manual and organizational measures, rather than true technical solutions that integrated into the data layer or the tech stack where data was actually being processed. And we saw data processing throughout organizations get increasingly more technical on the marketing side, and the sales side, and everywhere but the legal and privacy side. And so our founders, Ben and Mike, several years ago, decided that a solution that was much more technical in nature is where privacy and privacy governance was going. And so they founded Transcend. I came on board about a year ago to lead the legal and privacy organization, which is unique at a company that is a privacy tech company, because it’s all the traditional stuff that you would do as the first in-house lawyer at, you know, a fast-growing startup. So there’s corporate and IP and employment and all of that, and our own internal privacy compliance as well. But I also have the unique opportunity to work really closely with our product team and our engineering team and our sales organization, to help guide where the products are going, and to work really closely with our customers as they are adopting and implementing our products and tools. And one of the things we’re going to talk a lot about today, I think, is sort of the next chapter in Transcend’s history, and, I think, in a lot of privacy organizations’ histories: the adoption of AI, and how the mandate around privacy governance is becoming much broader, given a lot of the unique risks that AI presents.
Greg Lambert 4:31
I like how you phrased it as you had the unique opportunity, when really I read that as I’m doing all the work.
Marlene Gebauer 4:42
And I do have a follow-up. Before we get into the AI, let’s take a little trip down memory lane. So, you know, you had mentioned that, historically, things were done very manually. I mean, do you have a specific example that you could share of something that was done very manually that technology eventually helped out with?
Brandon Wiebe 5:06
Absolutely, yeah. And I’ll use an example from my own history. So before I was at Transcend, I was leading the product and privacy counseling function at Segment, which was eventually acquired by Twilio, and I continued on in that role at Twilio. But when we were at Segment, we looked around at the technology available to do a data map, to get a sense of where all of the data in our systems resided. And at that point, the solutions that were out there were very manual in process and scope. And what that meant was, you had to go around and interview every data system owner in your company, and write down what data they said they had in that system and why they were processing it. And from that, you would be able to build a data map. It would be out of date very quickly, because systems change very rapidly, and you had to trust that the person who was telling you this knew what they were talking about, and that they put in the time to actually evaluate that system and understood what data was in there. More modern privacy technology, like Transcend, attacks these problems at the technical level: it’s actually going through systems and programmatically finding what systems may have data in them, and then digging even deeper and analyzing the contents of those systems, and programmatically building a real-time data map for you. So that’s an example of where technology helps solve one of these issues. It was much more manual, you know, even five years ago,
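The programmatic data-mapping approach Wiebe describes here might be sketched, in very broad strokes, like this. Everything below is invented for illustration, not Transcend’s actual implementation: the system names, field names, and the simple keyword-based classifier are all hypothetical stand-ins for real connectors and detection logic.

```python
# Sketch: scan each data store's schema for fields that look like personal
# data, instead of interviewing system owners. A real connector would
# introspect live databases; here schemas are passed in as plain dicts.

PERSONAL_DATA_HINTS = {"email", "name", "phone", "address", "ip_address"}

def discover_schema(system_name, schemas):
    # Stand-in for a real connector that introspects a live system.
    return schemas.get(system_name, set())

def build_data_map(systems, schemas):
    data_map = {}
    for system in systems:
        fields = discover_schema(system, schemas)
        # Flag any field whose name contains a personal-data hint.
        flagged = sorted(
            f for f in fields
            if any(hint in f.lower() for hint in PERSONAL_DATA_HINTS)
        )
        data_map[system] = {"fields": sorted(fields), "personal_data": flagged}
    return data_map

# Example: two systems, one holding personal data nobody mentioned in an interview.
schemas = {
    "crm": {"email", "company", "deal_stage"},
    "analytics": {"page_views", "ip_address"},
}
data_map = build_data_map(["crm", "analytics"], schemas)
```

Because the map is rebuilt from the systems themselves on each run, it stays current as schemas change, which is exactly the weakness of the interview-based approach.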
Greg Lambert 6:48
Yeah, I bet it was eye-opening, when you would run this process on a company’s data, to be able to show them, well, you’ve got all of these other data repositories, databases, places that store information. And I bet many of those had been built for one-off projects, or they didn’t even know existed. So the automation is definitely something that helps uncover, you know, not just the good data that’s out there, but just the mess of data that can be out there as well.
Brandon Wiebe 7:30
That’s exactly right. That’s exactly right. And privacy and legal are not the only teams interested in finding that, too. You know, the security team is very interested, but also the engineering team, given the hosting costs of holding on to data that you really don’t need. So it definitely has impact cross-functionally as well.
Greg Lambert 7:49
So, you know, we wanted to jump into the AI conversation as well. What types of unique data privacy and AI ethics challenges have you encountered as the General Counsel there at Transcend? And how do you navigate those to reduce the risk?
Brandon Wiebe 8:09
It’s a great question. So I’ll talk a little bit about some of the internal challenges that we’ve encountered at Transcend, and some of the solutions we put into place. But we’ll also spend a little bit of time talking about some of the solutions that we’re building right now that we hope other folks can implement as well, as they build out their own AI governance policies internally. So I think, like a lot of really fast-moving startups, Transcend is really eager to adopt AI as quickly as possible, and as responsibly as possible. And one of the challenges that I’ve seen is that there are a lot of tools out there that are subtly adding new AI functionality to a tool that we’re already using, that we already have a contract with. And it can be very hard to identify and vet those, but they do present some unique risks that did not previously exist in the way that we were using the tool in the past. No, sorry. Go ahead.
Marlene Gebauer 9:21
No, you go ahead. Because I’d say it’s not even subtle. I mean, everybody’s announcing it, like, oh, we’re gonna be having generative AI as part of our existing products. And so, yeah, it’s big news.
Brandon Wiebe 9:34
That’s right. And I think the volume of tools, and the pervasiveness of how these features get added, is very hard for legal teams sometimes to get their hands around or to keep track of. So, you know, this came up in the coding area, where our engineering team started using GitHub Copilot, and thinking through the implications of using something like Copilot to generate code, and where that code resides. That’s a complex IP problem to think through. And if the legal team is not aware that that’s even happening to begin with, that can be a big challenge. There are also, you know, similar risks presented by net new vendors that have an AI functionality to them. So we use Jasper, for example, on our sales and marketing side to generate some content, and analyzing what we can do with that, how we can use that, the IP rights in that, making sure we’re vetting it for marketing claims, and being thoughtful about that entire process, is another new and unique challenge presented by AI. So when we began to see some of these new features and new tools emerge, I wanted to take a step back and think about, well, practically, what is the impact of adding this new AI functionality to our technology stack? And is there a way that we can more thoughtfully or more programmatically analyze these issues, without slowing down innovation or freaking out the teams that are trying to adopt these? And what we started with was, I think, what a lot of organizations are doing, which is a code of conduct around generative AI that laid out, at a high level, what some of the big risks to the organization, and to our customers or to data subjects, could be from using generative AI, and some guidelines, some road markers, that folks can use to understand when it’s okay to use these tools: what use cases are always acceptable, and what use cases might present some risk, where you should come to our team to talk to us about that, and we can help guide you.
And which use cases are always a no-go. And this sort of lightweight code of conduct adoption, I think, does a couple of things. One, it makes folks feel more comfortable about how they’re engaging with these tools and using artificial intelligence in their work. And they don’t feel like they have to sneak around and use, you know, ChatGPT on their personal login; they feel like the company has given them pretty clear guidance on what is okay. But I also think it sends a really strong signal that we want to adopt this stuff, and use it to improve efficiency, to be at the vanguard of technology development. We just wanted to make sure we’re doing it thoughtfully and responsibly.
Marlene Gebauer 12:55
So hold that thought. You know, we sort of went through this, like, initially, everybody loves generative AI, and then we were like, everybody’s afraid of generative AI. And now we’re kind of at the point where it’s like, well, we have some healthy skepticism about Gen AI. So, you know, in your view, where do you think corporate governance is lacking when it comes to oversight? And, I mean, you may have addressed that just now. But given the constant change of the tools, the constant change of how we interpret what our relationship should be with these tools, how should those governance models evolve in a timely way?
Brandon Wiebe 13:50
It’s such a good question. So I would say, I think corporate governance right now has a lot of policies in place already, for, you know, robust organizations, mature organizations, that touch on some of the risks presented by AI systems. For example, a company that has a really robust privacy governance program is going to be able to vet AI tools for a lot of the privacy risks that are presented by those tools. And so I do think, you know, we think about AI as this totally new technology, but it shares a lot of similarities with other technology that we’ve encountered previously, right? So I do think that there is something that we can borrow from current corporate governance policies that will allow us to get part of the way there. But, to stay with the privacy governance example, we know that AI systems can present risks and never touch personal information at all. They can present risks around trustworthiness or bias without ever processing personal data in their training set or in their input or their output. And so there are going to be areas where current corporate governance is going to miss some risks. And so my perspective on this is, you need AI-specific governance models that include both organizational measures, so code of conduct policies and general guidance and principles around how to responsibly adopt and implement and build AI systems, but also some sort of technical enforcement mechanisms as well. And that probably starts with technical solutions that give you insight into what AI systems you actually are using, so you can catalog those. I analogize this to the data mapping principle: you can’t govern the data that you’re processing unless you know where it resides, what data it is, and why you’re using it.
You cannot govern AI systems, or apply a code of conduct, if you don’t know what systems you’re using and what systems your teams are building. And so a technical solution that at least begins to catalog those, and gives you some observability, is, I think, going to be really important. It also has the benefit of being incredibly flexible, because regardless of how your code of conduct may evolve, or regulations may end up evolving, knowing what systems you have is always going to be step one, in order to do a gap analysis and begin to actually enforce a governance model on those things.
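The "catalog first, then apply the code of conduct" idea Wiebe lays out can be sketched as a simple registry plus a lookup. The use-case categories, tool names, and three-tier policy below are hypothetical illustrations of his always-acceptable / talk-to-legal / no-go framing, not Transcend’s actual taxonomy.

```python
# Sketch: a registry of known AI systems tagged by use case, so a
# lightweight code of conduct becomes an enforceable lookup rather than
# a document nobody consults. All categories here are illustrative.

ALWAYS_OK = {"code_completion", "internal_search"}
NEEDS_REVIEW = {"marketing_copy", "customer_chat"}
NO_GO = {"automated_hiring_decisions"}

def classify_use_case(use_case):
    if use_case in ALWAYS_OK:
        return "acceptable"
    if use_case in NEEDS_REVIEW:
        return "talk to legal"
    if use_case in NO_GO:
        return "prohibited"
    # An unknown system is itself a governance gap: step one is cataloging it.
    return "uncataloged: add to registry and review"

registry = {
    "GitHub Copilot": "code_completion",
    "Jasper": "marketing_copy",
}
decisions = {name: classify_use_case(uc) for name, uc in registry.items()}
```

The flexibility he mentions falls out naturally: if the code of conduct or a regulation changes, only the category sets change, while the registry of systems, the hard part to build, stays put.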
Marlene Gebauer 16:51
And how important is sort of communication as to why the governance is the way it is? Like, you know, not all organizations are created equal. I think, you know, some have clients, or the type of work they do requires, you know, stricter privacy methods, and others, you know, perhaps have foreign regulation that applies to them. So how important is the communication, given the fact that, you know, everybody, I shouldn’t say everybody, but a lot of people want to kind of get on this bandwagon and use it?
Brandon Wiebe 17:31
I think communication is incredibly important. And if legal teams are leading the charge in this area, I think getting buy-in from stakeholders at all levels of the organization is going to be really important. And that comes with communicating and articulating really clearly, you know, what the risks are. And I don’t think the risks are exclusively, you know, regulatory. I will say, you know, I see a lot of folks say that AI is unregulated right now, and that folks are drafting regulations, and eventually we’re going to get to a state, you know, particularly in the EU, but we’re also seeing movement on the federal level in the US, where we’re going to have specific, discrete requirements around AI systems. I sort of scratch my head at that, because I think AI is regulated in many respects already. And from the privacy perspective, for example, we already have very clear guidance in the EU and in the US around processing of personal information, and what is acceptable and what is not. There are a whole bunch of other risks that AI systems present that will be regulated as well. But I don’t think we, as, you know, attorneys who are advising our internal clients on this, have to wait for those regulations to be able to point to some regulatory risks. I also think it’s really important for legal leaders to be able to articulate risks beyond just regulatory risks. So there are brand- and trust-related aspects to adopting AI. And we saw, at the time of recording this, just last week, a very popular video communications tool have an issue that arose around how they were processing data, or how they stated they were processing data, for their AI systems. And this was an example, and I don’t know internally, you know, how that decision was reached at this organization.
And I don’t want to speculate, but this is an example of where changes to how you are approaching AI, and how you’re communicating that externally, can very significantly and very quickly impact your brand. And so a legal leader being able to articulate that side of the risk equation as well, I think, is going to be incredibly important in bringing, you know, executives along with you.
Greg Lambert 20:11
Now, just hearing you talk and reading some of the articles, the blog posts that you’ve put online, I know that you have stressed that you believe that the G, the governance, in ESG is poised for a drastic change, with legal teams helping lead the charge on this change. So what kind of specific governance updates are you seeing coming into play now?
Brandon Wiebe 20:44
Yeah, so I think we’ve seen a pretty similar trajectory with privacy, where we started with organizations engaged in something like voluntary compliance, where they were simply trying to comply with privacy principles as a baseline. Certainly in the US, we saw this, you know, before CCPA came out, where organizations were really just trying to adopt privacy policies, because there was sort of an FTC Act Section 5 type of requirement for you to be truthful about what you were doing with personal information, but no other real regulations around that. And then, just in the last five years, we’ve seen us go from that, to very principles-based privacy regulation, to privacy regulation that is much more specific and technical in nature; most recently, I think, CPRA and all of the other state laws that have come out are a good example. And my guess, and again, I don’t want to prognosticate too much, because whenever I do that, I usually get it wrong, but my guess is that we’re going to see something very similar for AI regulation and AI governance, where we go from voluntary compliance, to principles-based regulation, to very technical and specific requirements. Except the only difference for AI is it’s going to happen much faster than it ever did for privacy regulation. I think we’re going to see that evolution happen in a matter of a couple of years, rather than, you know, the 20-plus years from the time that we had the Privacy Directive in the EU to what we see right now in the privacy landscape. So I do expect that we will see internal governance requirements have to become much more technical in nature. And as I was talking about earlier, I think technical measures and controls are going to be adopted, and I think the industry and regulators are going to coalesce around those much faster than we saw happen with privacy.
Marlene Gebauer 23:17
Exactly, it almost has to, right? You talk to a lot of other general counsel in the course of your work. How are you advising them to take a leadership role on issues like ethical AI practices and data privacy?
Brandon Wiebe 23:35
Yeah, another great question. So I think GCs and legal executives are only going to be able to take this sort of leadership role if they are going to do what is often a challenge for in-house legal organizations, which is to be very proactive and deeply engaged in the AI implementation and development process at their organizations. You know, oftentimes we, as legal leaders, can be put into a position where we have to be very reactive to changes. The way I see it, I think AI is going to require really deep technical understanding from legal teams, and it’s going to cover both legal and non-legal functional areas. So, as we were talking about, you know, some of the risks that are presented by AI right now, beyond things that are already regulated, sort of fall into a non-legal area. But I think organizations are going to have to rely on their legal teams to tackle those. And the legal team being engaged really early on in the process is going to be key. And so when I think about those things, like this deep technical understanding, coverage of legal and non-legal issues, and really early engagement, what this reminds me of is a product counseling function in an organization. And for listeners who are not familiar with the product counseling concept, this is something a lot of technology companies have adopted in the last several years, I think it was started at Google, where you have a counsel in-house that doesn’t necessarily have one specific subject matter area that they are really deep on. So they may not be, you know, a patent counsel, they may not be a privacy counsel, but they have a really good general understanding of a lot of these subject matter areas. So they have breadth, and they’re a generalist, and they are embedded with a particular product team, so that they can learn that product and that technology really deeply, and be able to issue-spot and identify early on what issues may arise during that product development process.
I think organizations that are going to effectively govern AI, and legal leaders that are going to develop leadership in this area, are going to have to follow a very similar model in how they work with their product teams, or, you know, whatever cross-functional team it is that is implementing, adopting, or building AI technology. So my view is, you know, that early engagement, even if it’s very lightweight, even if it’s just a listening exercise in the beginning, that is where structuring this sort of legal cross-functional support is going to start.
Greg Lambert 26:47
So I saw where you had a recent announcement where you’ve created the AI risk assessment template there at Transcend. So, if I’m a GC at another company, how would you advise me to use this template to help me get kind of ahead of any of the issues before they become a regulatory crisis?
Brandon Wiebe 27:14
Yeah, and I think that AI risk assessment template is a good example of a way that a legal team can engage early on in the process, before a tool is built, before a new tool gets implemented. So this is really just a lightweight questionnaire. You know, we looked at a number of the different risk frameworks that are out there right now. So we looked at the NIST risk management framework, we looked at guidance from the OECD, we looked at the EU AI Act drafts. And we tried to develop a set of questions that legal and product teams could answer together about a new AI system, to begin to probe what potential risks existed. We also tried to draft this in a way where the steps included a fact-finding portion, as well as a risk identification and mitigation portion, with the outcome being driving towards a released product or a released implementation that has properly balanced the risks that were identified, but gets to something that is launched. So the goal of these templates is not to slow down the innovative process; it’s to document what’s happening, and to guide teams early on, so that they can avoid having to pull a product down the road, or having to deal with, you know, a major regulatory crisis when regulation gets passed eventually. The thing that I typically tell product teams, or other attorneys that are trying to engage with product teams on these types of issues, is: early engagement is like pointing a rocket a couple of degrees when it’s on the ground, rather than launching it into space and then trying to, you know, navigate it 100,000 miles in the opposite direction. It is much easier early on to make very small nudges one direction or the other, before you’ve built something. The inertia of an implemented system or a launched tool is so great that it becomes very difficult to pull that back in or change it once it’s out in the wild. So I think, again, early engagement is super important.
And these sorts of risk assessment templates are a great way to help teams analyze that risk early on in that development process.
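The two-part structure Wiebe describes, fact-finding followed by risk identification and mitigation, driving toward a documented launch, can be sketched as data plus a small function. The questions, flags, and mitigations below are invented for illustration; real assessments would draw on frameworks like NIST’s AI RMF, as he notes.

```python
# Sketch of a risk assessment template as code: a fact-finding section,
# then rules that map flagged facts to mitigations. The aim is a mitigated,
# documented launch rather than a blocked one. All items are hypothetical.

FACT_FINDING = [
    "What data does the system ingest?",
    "Which foundation model does it call?",
    "Who are the end users?",
]

RISK_RULES = {
    "processes_personal_data": "Run a privacy impact assessment",
    "customer_facing_output": "Add human review before publication",
    "automated_decision": "Document an appeal/override path",
}

def assess(facts):
    """Return (risks, mitigations) for the facts flagged True."""
    risks = [k for k, v in facts.items() if v and k in RISK_RULES]
    mitigations = [RISK_RULES[r] for r in risks]
    return risks, mitigations

# Example: a hypothetical internal tool that touches personal data and
# makes automated decisions, but has no customer-facing output.
risks, mitigations = assess({
    "processes_personal_data": True,
    "customer_facing_output": False,
    "automated_decision": True,
})
```

The point of the exercise matches the rocket analogy: each mitigation is a small nudge applied before launch, when it is still cheap.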
Marlene Gebauer 30:01
So file this question under, you know, generative AI is replacing all our jobs. You know, how are AI tools influencing and reshaping the role and responsibilities of GCs?
Brandon Wiebe 30:16
Yeah, it’s a great question. Again, I think, for attorneys themselves, for in-house counsel, you know, there are certainly plenty of AI tools out there that we are going to start adopting to speed up our lives, to make us more efficient. And teams that are open to doing that in a responsible way are going to accelerate and provide a level of impact and service that is much higher than their peers who have chosen not to do that. Certainly, for any listeners of this podcast, this is top of mind for them; you know, I listened to several episodes as well. And there are so many great solutions out there for in-house counsel to speed up a lot of internal process and time suck that, historically, has been what we have had to focus on, and allow us to become much more strategic and thoughtful. I think there’s also a lot of opportunity for lawyers that are in-house and working with product and engineering teams to develop AI tech in-house that can help them as well. I’ll just give you a quick example. At Transcend, you know, we built something we call Privacy GPT internally, which is a tool that we have trained on all of the most recent privacy regulations and developments since, you know, the GPT-4 training cutoff date of September 2021. Most of the regulatory issues that I encounter in terms of privacy or AI have developed in the last couple of years; they’re not something that you could go to, you know, your own version of ChatGPT and get any sort of guidance on. So we began to train our own model on this, and have begun to use it internally. And again, it’s paired with a code of conduct and guidance around how to understand and actually use AI, and know what the technology is, and know that it’s not going to give me a 100% accurate answer all the time.
But it’ll give me a place to start my research, and that has allowed me to, you know, speed up answers and increase the level of service that I can provide internally. I think a lot of in-house legal organizations are going to have to adopt that sort of mentality, or they’re going to get left in the dust.
Marlene Gebauer 33:00
And my question was kind of tongue in cheek, because I know most general counsel and their departments are very overworked. And so this is actually a welcome solution.
Greg Lambert 33:14
Work will, though, find some way to fill those gaps, with more work. So, Brandon, I was reading just this morning about a new tool that Transcend is developing called Pathfinder, with the idea of, again, helping businesses secure AI governance. Can you elaborate a little bit on how Pathfinder works? From what I’m reading, it has this customizable architecture that allows for better compliance and risk management. How can you kind of give us a little bit of an intro to what Pathfinder does?
Brandon Wiebe 34:00
Yeah, happy to. So we’re really excited about this new tool that we’ve been developing. And we are right at the outset right now of signing up early access partners to test this out. So Pathfinder is really meant to solve that technical side of the governance problem that I was talking about earlier, which is to stitch observability and auditability and governance into the technical layer of using AI systems. So the way that Pathfinder works, in essence, is it acts as an intermediary between an organization’s tools that they are building internally, and the foundation model LLM that they are building those tools off of. So for organizations that are using OpenAI to build an internal system or a tool, or to build a chatbot, or to analyze data, whatever it happens to be. Right now, we see organizations begin to develop a lot of those simultaneously on many different teams, and not have great insight or oversight into how those are connected to all these different foundation models, what data they’re processing, where they reside. And you can do an impact assessment and document it manually that way, and I think that’s a good first step. But longer term, again, I think a technical solution is going to be important. And so Pathfinder acts as that interstitial layer that can observe and audit what is happening with these tools, and how that information is flowing back and forth to these foundation models. And then, longer term, it will be able to provide governance rules and layers over that, so that organizations can block unwanted content, filter, you know, strip out personal information, that sort of thing.
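The intermediary-layer architecture Wiebe describes can be sketched as a gateway that internal tools call instead of calling the foundation model directly: it keeps an audit log and can filter content in flight. This is a generic sketch of the pattern, not Transcend’s Pathfinder implementation; the model client here is a stub, and the email-redaction rule is an invented example of the "strip out personal information" idea.

```python
# Sketch: a gateway between internal tools and a foundation model API.
# Every call is logged for auditability, and prompts are redacted before
# they leave the organization. Names and rules are illustrative only.
import re
import time

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

class ModelGateway:
    def __init__(self, model_fn):
        self.model_fn = model_fn    # e.g. a call to a foundation model API
        self.audit_log = []

    def complete(self, tool, prompt):
        # Governance layer: strip personal information before the call.
        redacted = EMAIL_RE.sub("[REDACTED]", prompt)
        response = self.model_fn(redacted)
        # Observability layer: record which tool asked what, and when.
        self.audit_log.append({
            "tool": tool, "prompt": redacted,
            "response": response, "ts": time.time(),
        })
        return response

gw = ModelGateway(lambda p: f"echo: {p}")   # stub standing in for the real model
out = gw.complete("support-bot", "Summarize ticket from alice@example.com")
```

Because every internal tool routes through the same layer, the organization gets a single catalog of which tools talk to which models and what data flows through them, which is the oversight gap he describes.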
Marlene Gebauer 36:09
How is Transcend preparing for potential changes in the AI regulations? And how will this Pathfinder product adapt to those changes?
Brandon Wiebe 36:20
So we are keeping our ear very close to the ground, yes, and reviewing, you know, every new draft of every proposed regulation that comes out right now. I think the approach that we’re taking is similar to the approach we’ve taken with privacy governance, which is: technical solutions ought to be regulation-agnostic in the way that they stitch into the technology itself. And what I mean by that is, legal teams and oversight teams are going to need some sort of technical access and insight and control over what is happening at sort of the ground-level truth of the technology, so the actual data packets that are flowing. Once you have that established, once you have that infrastructure in place, it becomes much easier to quickly change the logic layers over the top of that, that govern what is flowing. So when regulations update, and we see this on the privacy side, so again, by analogy, I’ll talk to that, but when we see a regulation change from an opt-in regime to an opt-out regime for a particular type of data, right, if you have the ability to govern that data at the technical level, it’s a very easy switch to flip to say, well, now we’re going to require that for all of this data we get an opt-in rather than an opt-out, or vice versa, right? So that is a similar approach that we’re taking for AI systems: once you are technically stitched in at that layer, it doesn’t so much matter where the regulations end up, because repurposing the logic that sits over the top of that is much easier than having to rebuild your whole infrastructure every single time.
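The "easy switch to flip" point can be made concrete with a small sketch: the consent regime for each data category lives in configuration over a fixed data path, so a regulatory shift between opt-in and opt-out is a one-line policy change rather than an infrastructure rebuild. The categories and regimes below are hypothetical.

```python
# Sketch of a regulation-agnostic policy layer: the enforcement code is
# fixed, and the regime per data category is just configuration.

policy = {"marketing_email": "opt_out", "precise_location": "opt_in"}

def may_process(category, user_opted_in, user_opted_out):
    regime = policy.get(category, "opt_in")   # default to the strict regime
    if regime == "opt_in":
        return user_opted_in                   # require affirmative consent
    return not user_opted_out                  # allowed unless the user objected

# A user who has expressed no preference either way.
allowed_before = may_process("marketing_email", user_opted_in=False, user_opted_out=False)

policy["marketing_email"] = "opt_in"           # regulation changed: flip the switch
allowed_after = may_process("marketing_email", user_opted_in=False, user_opted_out=False)
```

Nothing about the data path changed between the two calls; only the policy table did, which is the flexibility argument for stitching governance in at the technical layer once.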
Greg Lambert 38:26
Now, be honest, are you having the AI review all the regulation changes and proposed changes that are out there, to give you a first draft?
Brandon Wiebe 38:37
You know, a first draft is always helpful, but I've seen this thrown out several times on LinkedIn and elsewhere in the industry: we view generative AI as the overeager intern. And so it's a good first draft, and it really tries hard, but you always need to double-click a little bit on, you know, whatever you get from that.
Greg Lambert 39:05
I like that, the overeager intern. I like that a lot. Well, Brandon, we ask all of our guests at the end of the interview our crystal ball question, which is, you know, let's peer into your crystal ball and look into the future, and let us know what changes or challenges you see in the next two to five years that you as a GC, and Transcend as a company, may end up facing.
Brandon Wiebe 39:41
Again, I hate to prognosticate too much. But I think how swiftly AI technology has been adopted, and the overwhelming momentum from, you know, the C-suite to be at the leading edge of either developing their own AI systems, or implementing them, or even just re-skinning what they have as an AI tool, is going to require that GCs become experts in this technology. And if they can't do it themselves, they're going to need to hire or develop teams internally that can become experts in the tech. I think becoming an expert in that technology, and getting comfortable with that technology, is almost more important than being an expert on all the discrete subject matter areas or legal risks. Because comfort with how the technology is actually operating will allow legal counsel to provide much more specific, discrete, and practical advice to clients. I see a lot of teams that look at AI technology with trepidation, not because they can point to a specific type of risk, but because they don't truly understand how the technology is actually operating. And so I do think that, you know, the legal teams that are going to be successful over the next two to five years are the ones who understand the technology at a really deep level.
Greg Lambert 41:38
All right, well, Brandon Wiebe, General Counsel and Head of Privacy at Transcend, thanks for taking the time to come on The Geek in Review and talk to us.
Brandon Wiebe 41:47
Thank you so much, Greg, and Marlene, really appreciate it. This was great.
Marlene Gebauer 41:51
And of course, thanks to all of you, our listeners and subscribers, for taking the time to listen to The Geek in Review podcast. We really appreciate your support. If you enjoy the show, share it with a colleague. We'd love to hear from you, so reach out to us on social media. I can be found on LinkedIn, at @gebauerm on X, and at @gebauerm66 on Threads. We also have The Geek in Review account on Threads.
Greg Lambert 42:14
That's, that's a lot of places anymore, a lot of places.
Marlene Gebauer 42:18
I gotta narrow that down next time, next time.
Greg Lambert 42:21
And I can also be reached at @glambert on X and @glambertpod on Threads. And actually, I just finally got my account for Bluesky too, which is also @glambertpod. So who knows, now we've got 15 different places to post.
Marlene Gebauer 42:41
Wherever you are, we’re there
Greg Lambert 42:42
We are, we are. So Brandon, if somebody wanted to continue the conversation or learn a little bit more about what you're doing, where can they find you online?
Brandon Wiebe 42:52
Absolutely. Well, I can be found on LinkedIn at bwiebe, that's B-W-I-E, B as in boy, E. And Transcend can be found at email@example.com, on LinkedIn at Transcend, and on X at @transcend_io.
Marlene Gebauer 43:11
And listeners can also leave us a voicemail on our Geek in Review Hotline at 713487782. And as always, the music you hear is from Jerry David DeCicca. Thank you, Jerry. Thanks, Jerry. All right, Marlene. Bye-bye.