Artificial intelligence has moved fast, but trust has not kept pace. In this episode, Nam Nguyen, co-founder and COO of TruthSystems.ai, joins Greg Lambert and Marlene Gebauer to unpack what it means to build “trust infrastructure” for AI in law. Nguyen’s background is unusually cross-wired—linguistics, computer science, and applied AI research at Stanford Law—giving him a clear view of both the language and logic behind responsible machine reasoning. From his early work in Vietnam to collaborations at Stanford with Dr. Megan Ma, Nguyen has focused on a central question: who ensures that the systems shaping legal work remain safe, compliant, and accountable?
Nguyen explains that TruthSystems emerged from this question as a company focused on operationalizing trust, not theorizing about it. Rather than publishing white papers on AI ethics, his team builds the guardrails law firms need now. Their platform, Charter, acts as a governance layer that can monitor, restrict, and guide AI use across firm environments in real time. Whether a lawyer is drafting in ChatGPT, experimenting with CoCounsel, or testing Copilot, Charter helps firms enforce both client restrictions and internal policies before a breach or misstep occurs. It’s an attempt to turn trust from a static policy on a SharePoint site into a living, automated practice.
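Charter’s internals aren’t public, so the following is only a rough illustration of the kind of pre-submission check described here: a prompt is screened against a client’s approved-tool list and restricted terms before it ever reaches a model. The Policy shape, tool names, and check_prompt helper are hypothetical, not TruthSystems’ implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Policy:
    """Hypothetical client policy: approved tools and terms that must not leave the firm."""
    client: str
    approved_tools: set[str] = field(default_factory=set)
    blocked_terms: set[str] = field(default_factory=set)  # e.g. matter code names

def check_prompt(policy: Policy, tool: str, prompt: str) -> list[str]:
    """Return any violations found before the prompt is sent to the AI tool."""
    violations = []
    if tool not in policy.approved_tools:
        violations.append(f"{tool} is not approved for client {policy.client}")
    for term in policy.blocked_terms:
        if term.lower() in prompt.lower():
            violations.append(f"prompt contains restricted term: {term}")
    return violations

# Example: a client that has approved CoCounsel but not ChatGPT
policy = Policy(client="Acme Corp",
                approved_tools={"CoCounsel"},
                blocked_terms={"Project Falcon"})
print(check_prompt(policy, "ChatGPT", "Summarize the Project Falcon term sheet"))
# -> two findings: unapproved tool, restricted term in the prompt
```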
A core principle of Nguyen’s work is that AI should be both the subject and the infrastructure of governance. In other words, AI deserves oversight but is also uniquely suited to implement it. Because large language models excel at interpreting text and managing unstructured data, they can help detect compliance or ethical risks as they happen. TruthSystems’ vision is to make governance continuous and adaptive, embedding it directly into lawyers’ daily workflows. The aim is not to slow innovation, but to make it sustainable and auditable.
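To see why a language model is a plausible piece of governance infrastructure rather than only its subject, consider a minimal risk-screening pass over a draft or prompt. The sketch below assumes a hypothetical call_llm function standing in for whatever approved model a firm uses; the risk categories and prompt wording are illustrative only.

```python
import json

RISK_PROMPT = """You are a compliance reviewer at a law firm.
Flag any of these risks in the text below: confidentiality breach,
privilege waiver, missing citation, reliance on an unapproved tool.
Respond as JSON: {{"risks": [], "explanation": ""}}

Text:
{text}"""

def screen_for_risk(text: str, call_llm) -> dict:
    """Ask an approved model to flag compliance risks in a prompt or draft.
    call_llm is a stand-in for whatever model endpoint the firm has approved;
    it takes a prompt string and returns the model's text."""
    raw = call_llm(RISK_PROMPT.format(text=text))
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        # Unparseable output is treated as its own failure mode, not a silent pass.
        return {"risks": ["unreviewable_output"], "explanation": raw}
```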
The conversation also tackles the myth of “hallucination-free” systems. Nguyen is candid about the limitations of retrieval-augmented generation, noting that both retrieval and generation introduce their own failure modes. He argues that most models have been trained to sound confident rather than be accurate, penalizing expressions of uncertainty. TruthSystems takes the opposite approach, favoring smaller, predictable models that reward contradiction-spotting and verification. His critique offers a reminder that speed and safety in AI rarely coexist by accident—they must be engineered together.
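The reward-design point can be made concrete with a toy scoring rule. Under a naive benchmark, abstaining scores zero while guessing sometimes pays off, so a model that never says "I don't know" wins in expectation; penalizing confident wrong answers flips that incentive. The numbers below are illustrative, not drawn from any published benchmark.

```python
def naive_score(answer: str, correct: str) -> float:
    # Typical benchmark scoring: 1 point for a right answer, 0 for anything else.
    # Saying "I don't know" scores the same as being wrong, so guessing is never punished.
    return 1.0 if answer == correct else 0.0

def calibrated_score(answer: str, correct: str, wrong_penalty: float = 2.0) -> float:
    # Abstaining is worth 0; a right answer is worth 1; a confident wrong answer costs more.
    if answer == "I don't know":
        return 0.0
    return 1.0 if answer == correct else -wrong_penalty

# A model that is right 40% of the time when it guesses:
p = 0.4
print("naive, always guess:     ", p * 1.0)                  # 0.4, beats abstaining (0.0)
print("calibrated, always guess:", p * 1.0 - (1 - p) * 2.0)  # -0.8, abstaining now wins
```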
Finally, Nguyen discusses TruthSystems’ recent $4 million seed round, led by Gradient Ventures and Lightspeed, which will fund the expansion of their real-time visibility tools and firm partnerships. He envisions a future where firms treat governance not as red tape but as a differentiator, using data on AI use to assure clients and regulators alike. As he puts it, compliance will no longer be the blocker to innovation—it will be the proof of trust at scale.
Listen on mobile platforms: Apple Podcasts | Spotify | YouTube
[Special Thanks to Legal Technology Hub for sponsoring this episode.]
Email: geekinreviewpodcast@gmail.com
Music: Jerry David DeCicca
Transcript:
Greg Lambert (00:00)
Hi everyone, I’m Greg Lambert with the Geek in Review and we have with us today Sam Moore from Legal Technology Hub. And Sam, you’re going to talk to us a little bit about the technology selection offering that you have there.
Sam Moore (00:12)
Yep, that’s great. Thank you, Greg. So law firms and law departments come to LegalTech Hub every day to find out key information about LegalTech vendors and products. We’ve got so much information available to any kind of technology buyer, but sometimes we’re engaged in a much more hands-on capacity. We actually offer a technology selection service, which is a six-step advisory framework we’ve designed to help our clients choose the right solution for them.
And our approach is entirely agnostic, independent, and is designed to help our clients by giving them exactly what they need, whether that’s conducting stakeholder interviews and needs assessments on their side, through vendor demonstrations and shortlisting, all the way towards producing an executive report outlining our top three recommended solutions for their particular business challenge, ready to go to their decision makers to act upon. So if you have a technology selection
project coming up, please go to legaltechnologyhub.com slash advisory to learn how we can help.
Greg Lambert (01:18)
All right, well, sounds super useful. Thanks for putting a spotlight on that, Sam.
Sam Moore (01:21)
Thank you.
Marlene Gebauer (01:29)
Welcome to The Geek in Review, the podcast focused on innovative and creative ideas in the legal industry. I’m Marlene Gebauer.
Greg Lambert (01:36)
And I’m Greg
Lambert and this week we are thrilled to welcome a guest who’s tackling what might be the single biggest barrier to AI adoption in the legal profession today and that’s trust.
Marlene Gebauer (01:48)
Absolutely. Nam Nguyen is the co-founder and chief operating officer of TruthSystems.ai. His background is a unique trifecta, perfectly suited for this challenge actually. So a foundation in linguistics and computer science from UCLA, deep research into supervisory AI agents at Stanford Law, and practical at-scale experience shipping trust and safety tools for a major fintech unicorn.
So welcome, Nam.
Nam Nguyen (02:20)
Thank you
so much for having me here, yeah.
Greg Lambert (02:24)
So, let’s talk a little bit about that background for a while. It’s that fascinating combination of linguistics and computer science, AI research at Stanford Law, and the practical trust and safety product management at the FinTech unicorn. So how did these distinct experiences, trifecta as we called it,
you know, kind of shape the DNA of TruthSystems as it is today?
Nam Nguyen (02:58)
Yeah, I mean, happy to share a little bit more about our story and my story. So yeah, I grew up in Vietnam. I learned English actually as a second language. And I think throughout that process, it really kind of sparked
the fascination for me of how humans actually acquire and how we can programmatically represent language. So that kind of segues perfectly into the linguistics and computer science background I had at UCLA. I also did a lot of NLP and AI research back there as well. And I think there I got to reconnect with my co-founder. He basically lived a block away from my childhood home back in Vietnam. And yeah, it was a very serendipitous moment. Back then we were just like…
playing around with the early GPT-3 models, and I think, like everyone, we just knew how capable this technology would be. But from a research perspective, what we were thinking is that the field would be so focused on answering the question of…
who will build the most performant models and applications. And I think my co-founder Alex and I wanted to ask, or answer, the question of who is making sure these systems are safe and trustworthy. And I think that question kind of brought us naturally to Stanford Law, where we met Dr. Megan Ma, and there we got to work with her on the supervisory AI agent project, as you mentioned. And there we started tackling, as you said, the number one problem for law
adopting AI, which was trust, and our first pass at it was, okay, can we embed legal and ethical guardrails into…
every AI interaction for lawyers? We were very fortunate to have presented this work to congressional members, EU policymakers, and risk functions at some AmLaw 30 firms. I think at those presentations, the reaction we saw made us realize that the need for trustworthy AI governance is urgent. Like, it can’t wait for another academic paper. And I think that’s how TruthSystems really spun out to be.
And going back to your question of how that kind of turned into the DNA of TruthSystems, I think first and foremost, research is so, so important, as Alex and I are both former researchers, but also our founding engineer, Mikul Wai, was previously the head of AI research at Lagora.
I think the difference here is we are trying to tackle it in a very pragmatic way. So we don’t want to just theorize about this topic. We’re trying to build the infrastructure for trust and safety that makes it real and actionable for law firms today.
Marlene Gebauer (05:38)
Yeah, I imagine that message resonates with firms, because it is something that they wrestle with all the time in terms of, are these things safe and is our data safe? We’d love to use these things, but… So hearing that you’re focusing pragmatically, I think, probably resonates quite well with them.
Nam Nguyen (05:47)
Mm-hmm.
Yeah, no, I definitely agree. I think it’s just been very fortunate for us to see kind of the response, because everyone wants to innovate, and innovate fast. But I think at times it’s very easy to have a lot of these projects go out of control. And our goal is to allow firms to exert controls over the exciting pace of this technology.
Marlene Gebauer (06:27)
And what has the response been, just in general? Like, how have they responded? I’m just curious.
Nam Nguyen (06:33)
Yeah,
I think it’s just kind of, we’ve seen such a spectrum of response, because we see really a spectrum of where firms are in their AI adoption journey, right? There are so many firms that have kind of waited up until now to get going on that.
And for us, we can come in and help them set up, okay, let us help you align that to your outside counsel guidelines. Let’s make sure that you have the proper access controls and guardrails to properly try out this technology. And then for firms that are more mature in the adoption journey, then we get to think through, okay, how can we make governance a competitive advantage for you? Where we go beyond just, you know, setting the right controls, but also showing your clients
these controls as well. I think, you know, being able to look at that spectrum of where people are in their journeys and how we can kind of bring them together and move along the technological progress chart, I think that has been super, super exciting for us at TruthSystems.
Marlene Gebauer (07:40)
So I’m very intrigued, Nam. On your website you state a vision where AI serves as both the subject and the infrastructure of governance. Can you tell us a little bit about what you mean by that?
Nam Nguyen (07:53)
Yeah.
That’s definitely one of my favorite things to talk about, because when we say AI should both be the subject and infrastructure of governance, you know, obviously we mean two things, right? First, AI as a subject: I think a lot of people agree it’s a very powerful technology that needs oversight, accountability, and guardrails. And you know, like any powerful technology, it just should be governed very responsibly. But I think the exciting part here is how can AI be the infrastructure for that?
If you look intuitively into what large language models really excel at, it is, first, interpreting language,
then being able to process a vast amount of unstructured data, and I think these two traits alone make AI the ideal candidate to help implement governance itself. So, you know, you guys probably talk a lot about the AI copilot for lawyers, where it’s kind of like an extra pair of arms helping lawyers with tasks such as research and review. You know, at Stanford and what we’re doing right now is thinking about how can we build an AI risk supervisor, where we’re building an extra pair of eyes alerting lawyers
of risks in real time, such as potential confidentiality, compliance, or even ethical violations. I think when we talk about these two paradigms, it’s about regulating AI not from the outside, but making AI part of the solution, where governance can live and breathe within the workflow, and we can transform trust and safety from something people say, or just a checkbox that we’re trying to get through, to a dynamic system that learns, adapts,
and evolves with the organization over time.
Greg Lambert (09:37)
Yeah, well, we’ve been looking for decades for things that can do well with the unstructured data, because we get tons of unstructured data to work with.
Nam Nguyen (09:45)
Yeah. Yeah.
Greg Lambert (09:49)
Well, let’s talk a little bit about the controversial conversations that are going around about things like hallucinations. We talked about hallucinations as being these flawed outputs from the AI models, where it takes something that you’re asking it and it just kind of makes something
up, or gives an answer that maybe the person wants to hear. However, at TruthSystems, with your Charter, I guess, what do you call Charter, like the program? How do you refer to your, what do you refer to as Charter?
Nam Nguyen (10:33)
Yeah, so Charter, basically, at the core is a platform that allows firms to deploy access controls and guardrails to all of their, you know, vendor AI applications, or even programs that they’re building themselves. And the way we manifest in the lawyer’s or paralegal’s or staff’s workspace is as a browser extension that basically can help firms monitor and alert before they can potentially violate any firm restrictions
or client restrictions in real time.
Greg Lambert (11:05)
Yeah, so one of the things that Charter does is kind of go in and preempt potential issues. And so why is it, I think, just as critical to govern the types of prompts that a lawyer writes as it is to verify the answers that the AI provides, and how does this kind of holistic approach to trust and safety kind of set you guys apart from the market?
Nam Nguyen (11:36)
Yeah, I mean, we do believe that just in regulated fields like law, the biggest risks actually begin with the input, right? You know, obviously take hallucinations for example.
We were trying to tackle that problem for a long time. And when we were in these environments, talking to firms and understanding what people do, what we found often happens is that people were actually going to public platforms that aren’t connected to case law or verified sources. That’s such low-hanging fruit. And then you look at a single prompt itself, right? It’s very tricky. At Stanford, we saw an instance: a law firm we were working with, their client had told the firm to only use AI after they
approved it. Now, one of the associates at the time actually drafted a memo in ChatGPT, an unapproved tool, and it was riddled with hallucinations. Now, it was good that the firm was able to catch it beforehand, but I want to take a beat here and just imagine what could have gone wrong. Like, if that slipped across the line, that’s a newsworthy headline, right? But
there’s also a fundamental level of violation here, where you’re going against the rules of engagement you have with your clients by using those unapproved tools. That’s the kind of breach in trust that I think will be very hard for a firm
to restore. And I think we built Charter to govern that entire input layer, from managing access to different platforms to governing the behaviors within them, so that we can allow firms to translate these policies and client restrictions into the enforcement tools that they need to step in the moment before someone could breach confidentiality or compliance.
The last part to it is, when you talk about this holistic approach to AI safety, I think that’s where TruthSystems really wants to differentiate, because we don’t want to just exist by preventing bad answers or outputs. We want to build systems, or help firms build systems, that make every interaction, from the first input to the final result, safe and compliant, and held to the highest values that the legal profession has always upheld.
Greg Lambert (13:48)
Yeah, I mean, it kind of makes sense, well, I mean, it definitely makes sense when we’re talking about unapproved, you know, using ChatGPT or Claude or Gemini that’s not approved. Is there anything that we’re missing as far as approved tools? So is there something we should also be aware of when it comes to, say, using Microsoft Copilot or Harvey or even the approved
Marlene Gebauer (13:59)
That’s like shadow use, yeah.
Nam Nguyen (14:01)
Yeah.
Mm-hmm
Greg Lambert (14:17)
tools is there anything that we need to be thinking about even with the approved tools when it comes to prompting and how we’re using it?
Nam Nguyen (14:25)
Yeah, I think the way that I always talk to firms about this is that what you have in your hands is a host of tools that can do basically everything, but can they do everything well? Probably not, right? I think the kind of analogy I always make is that, hey, you kind of have a very powerful hammer and you kind of want it to just nail down the right spot.
But then, just because sometimes you can cook an omelet on it, you don’t want to use it to cook an omelet, right? So going back to the example of Copilot, yeah, Copilot is very powerful in every single interaction, but it’s not suitable for you to do legal research on. And one of the things we like to work with firms on, or are already working with firms on, is how we can enforce behavioral guardrails, making sure that you’re using the right tools for the right use cases. So essentially allowing firms
to say, this is a tool we want to be used this way, and we don’t want any of our attorneys or staff to use it in any other ways that the tool shouldn’t be supporting. And I think that’s where we also want to focus with the approved tools as well.
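A minimal sketch of what such a behavioral guardrail might look like, assuming a hypothetical mapping of approved tools to task types and a deliberately crude keyword classifier; a production system would presumably classify intent with a model and draw the policy from the firm’s actual guidelines.

```python
# Hypothetical firm policy: which approved tools may be used for which task types.
APPROVED_USES = {
    "Copilot": {"drafting", "summarization", "email"},
    "CoCounsel": {"legal_research", "document_review"},
}

LEGAL_RESEARCH_HINTS = ("case law", "precedent", "statute", "cite a case")

def classify_task(prompt: str) -> str:
    """Toy intent classifier; a real deployment would likely use a model, not keywords."""
    p = prompt.lower()
    return "legal_research" if any(h in p for h in LEGAL_RESEARCH_HINTS) else "drafting"

def check_use(tool: str, prompt: str) -> tuple[bool, str]:
    """Allow the interaction only if this tool is approved for this kind of task."""
    task = classify_task(prompt)
    if task in APPROVED_USES.get(tool, set()):
        return True, f"{tool} is approved for {task}"
    return False, f"{task} should not be done in {tool}; route to an approved tool instead"

print(check_use("Copilot", "Find case law on anticipatory breach in Texas"))
# (False, 'legal_research should not be done in Copilot; route to an approved tool instead')
```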
Marlene Gebauer (15:39)
There are a couple of points I’m thinking about when you talk about this. Like firms saying, okay, we only want you to use these types of tools. But as you said, that tool might not be good to cook an omelet, and that’s maybe where your innovation or your KM or your research people are coming in and giving information about, like, yeah, these tools are good for this and these other tools are good for something else. So I think there’s an opportunity for those groups to kind of work and provide their expertise
Nam Nguyen (15:50)
Yeah.
Mm-hmm.
Marlene Gebauer (16:09)
to sort of help with that guidance. And the other thing that struck me is, it’s hard to figure this stuff out because you have, I know, stating the obvious, right? It’s hard. It’s like, you have multiple areas where you’re trying to gather this information. Like you have outside counsel guidelines and you have the firm policy, and then there might be more than one policy. So for a practitioner, it’s kind of like,
Nam Nguyen (16:19)
Yeah ⁓
Greg Lambert (16:22)
It’s hard.
Nam Nguyen (16:34)
Mm-hmm.
Marlene Gebauer (16:39)
what do I do? And this type of tool combines all of that information and kind of helps you as you’re doing the work and sort of directs you in the right way.
Nam Nguyen (16:40)
Mm-hmm.
Mm-hmm.
Yeah,
100%. And I think this idea for us is, how can we operationalize governance, so it’s not just something where firms come together once every quarter or, like, you know, have multiple meetings just to draft up a static policy document,
and then it becomes something that just rots or gets forgotten in a SharePoint somewhere. We want policies and governance to live at every single interaction to guide users in the right direction. And I think it goes back to, you know, when we think through the DNA of what we want TruthSystems to be: how can we fundamentally shape the way
users, or in this case attorneys, interact with AI in different software, and can we impact it in a positive way that allows us to use these very powerful tools responsibly?
Marlene Gebauer (17:45)
Yeah.
So I want to get back to hallucinations again. And you’ve written about the limitations of RAG, retrieval augmented generation, saying it’s not a silver bullet for ending hallucinations. Now, there is some marketing out there for some systems saying they are hallucination free. I’m setting you up. I’m just setting you up. So based on your research, where does
Nam Nguyen (17:49)
Mm-hmm.
No.
Yeah, asterisk. Yes.
Greg Lambert (18:09)
That’s direct.
Marlene Gebauer (18:17)
a simple RAG implementation fall short, and how has that understanding informed TruthSystems in terms of providing a more robust layer of verification?
Nam Nguyen (18:27)
Yeah,
⁓ I think that’s a topic that we’ve cared very deeply about.
I think retrieval-augmented generation is often marketed as a fix for hallucinations, but it’s not, really. The problem exists at both the R, the retrieval part, and the G, the generation stage, right? I’ve written on the retrieval problem, where legal context is super, super tricky. You know, context engineering is major in the Valley right now, because for legal, there are similar clauses, jurisdictions, definitions that, you know, RAG pipelines can easily pick the wrong
context from, because it might see it as the right one. And then the model generates very confidently, but it’s based on kind of the wrong evidence. The generation problem actually is interesting, because OpenAI recently produced a paper called Why Language Models Hallucinate. And it explained that LLMs, especially the reasoning models these days, are trained on reward functions, which guide the model’s propensity to generate each answer. And the reward models basically are penalizing
these language models for uncertainty. So they actually lose points in training for saying, I don’t know. And why is that? Because all of these different benchmarks that we’ve produced, they’re marketing tools, and different labs just want to chase those higher benchmark scores. So in the race of chasing better marketing headlines, we’ve literally taught our models to answer even when they shouldn’t. And I think…
this is where our approach is in the opposite direction. We want to make smaller but more predictable models that are rewarded for spotting contradictions. And I think when you try to design models that value accuracy over confidence, I think verification will triumph over velocity and marketing headlines in this specific instance as well.
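One concrete way to reward contradiction-spotting over confident generation is a verification pass after retrieval and generation: each claim in the draft answer is checked against the retrieved passages, and anything unsupported or contradicted is surfaced rather than shipped. The nli callable below is a placeholder for whatever entailment or contradiction checker a team uses; the surrounding plumbing is illustrative, not a description of TruthSystems’ pipeline.

```python
from typing import Callable, Literal

Verdict = Literal["supported", "contradicted", "unsupported"]

def verify_claims(claims: list[str],
                  passages: list[str],
                  nli: Callable[[str, str], Verdict]) -> dict[str, Verdict]:
    """Check each generated claim against the retrieved passages.
    A claim is 'supported' only if some passage entails it; any detected
    contradiction overrides support; silence leaves it 'unsupported'."""
    results: dict[str, Verdict] = {}
    for claim in claims:
        verdict: Verdict = "unsupported"
        for passage in passages:
            v = nli(claim, passage)
            if v == "contradicted":
                verdict = "contradicted"
                break
            if v == "supported":
                verdict = "supported"
        results[claim] = verdict
    return results

# Anything not 'supported' gets routed back to the lawyer instead of into the memo.
```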
Greg Lambert (20:29)
Yeah, one of the problems, and I don’t think I’m saying anything that’s that controversial here, but the RAG technology for legal research has not been good for the last couple of years. It feels like it’s gotten a little bit better, and I think part of that is the underlying foundational models have gotten better at kind of going back, as far as that retrieval part.
Nam Nguyen (20:36)
Mm-hmm.
Mm-hmm.
Mm-hmm.
Greg Lambert (20:55)
If it doesn’t get the right stuff the first time, it can go back and supplement that. But now the generation part, now you’ve got me doubting myself whether it’s improved or it just sounds like it’s improved. Have you noticed any significant improvement or changes in how RAG technology works in legal?
Nam Nguyen (21:00)
Mm-hmm.
Yeah.
Mmm
Yeah, I think when you think through the problem of reward functions right now, it basically comes down to thinking through,
I guess, where would you bet in terms of scaling laws? So scaling laws are basically: can you just, you know, add more compute and it will automatically mean better models? And I think people are very divided on, you know, how scaling laws will play out. The reason why so many, you know, labs or companies that are creating evaluation tools right now are getting a lot of money is because people are betting that if you can continuously produce the right evaluations, scaling laws will
allow you to make the models generate better answers, and that is hopeful on that end. But if you don’t really bet on scaling laws, and a lot of people have come out and talked about this, for example, even, you know, Richard Sutton, the father of reinforcement learning, he is saying maybe large language models aren’t the paradigm to solve this. And I think as much as, you know, I’d like to be hopeful,
I also kind of want to listen to Richard Sutton too. At the end of the day, the answer is we will kind of have to wait and see. I do see that every vendor is getting a little bit better at it, and hopefully, with the scale and the interest and the money that is going into this field, we will get to an endpoint where it is good enough, I believe. Yes, yes.
Greg Lambert (22:53)
It’s really expensive to be good enough.
Marlene Gebauer (22:55)
Yes.
Greg Lambert (22:59)
Well, some good news for you. I know you’ve just announced a major $4 million seed round led by Gradient Ventures and Lightspeed. So yeah, congrats. So, you know, let me put it politely, you know, now that you’ve got the money, what are you going to do with it? So,
Marlene Gebauer (23:08)
Congratulations.
Nam Nguyen (23:09)
Thank you so much. Yeah.
Hahaha
Greg Lambert (23:20)
What does the infusion of capital do? Just as importantly, what does this vote of confidence from top-tier VCs enable you to do next? So let me just go back here. Now that you’ve got the money, what are you going to do with it?
Nam Nguyen (23:37)
Yeah,
yes, the million dollar question, essentially, right? Yeah, yes. Yeah, I think for us, the interesting problem set that we’d like to solve right now is, okay, how can we help firms,
Greg Lambert (23:42)
Yeah, four million in this case.
Marlene Gebauer (23:43)
Yeah, the $4 million question.
Nam Nguyen (23:56)
with this amount of investment, move from adopting AI to governing it in a measurable and client-facing way. So I think now we want to help everyone go beyond, like, okay,
I just bought this tool, I went out on a press release that I just bought this tool, to now being able to show, okay, how am I using it? And am I using it responsibly? And I think it’s just this deepening visibility and assurance to clients, and for us, investing in expanding the product development around this real-time visibility into how AI is being used across, you know, all your vendors, your clients and matters. We want to help firms demonstrate compliance and alignment with these
expectations. And I think for us to partner with innovation and risk teams inside of these firms to co-develop and test these capabilities in live environments, that’s where we really want to make trust and safety not just work in theory but also in practice as well. And I think overall, yeah, I think it’s no longer sufficient to say you’ve adopted AI.
The real competitive advantage is, can you show that you’re using it in a way that adds value to the client and is responsible? And we’re investing to give firms the ability to show, not just tell, how they’re innovating with, you know, transparency, accountability, and essentially, what happens at the end of the day is just trust at scale.
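Measuring AI use across vendors, clients, and matters implies an audit trail roughly like the one sketched below. The field names and aggregation are hypothetical, but the idea is that every governed interaction leaves a record that can be rolled up per client for exactly this kind of client-facing reporting.

```python
from dataclasses import dataclass
from collections import Counter
from datetime import datetime

@dataclass
class AIUsageEvent:
    """One governed AI interaction; field names here are illustrative."""
    timestamp: datetime
    user: str
    client: str
    matter: str
    tool: str
    task: str
    guardrail_findings: int  # violations flagged on this interaction

def client_report(events: list[AIUsageEvent], client: str) -> dict:
    """Aggregate usage for one client, e.g. to attach to an engagement review."""
    relevant = [e for e in events if e.client == client]
    return {
        "interactions": len(relevant),
        "tools": dict(Counter(e.tool for e in relevant)),
        "tasks": dict(Counter(e.task for e in relevant)),
        "interactions_with_findings": sum(1 for e in relevant if e.guardrail_findings > 0),
    }
```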
Greg Lambert (25:28)
Do you see this being built into the engagement letters with clients? Here’s not just how we’re using the AI, but how we’re actually governing it, making sure that we’re complying both with our internal rules and your rules and other guidelines. Do you see that as being significant enough to be kind of embedded in the engagement letters?
Nam Nguyen (25:53)
Yeah, I think the engagement letter is one thing, but even just business development materials too. By virtue of having all this wealth of data you’re able to collect, saying, these are the different ways our attorneys are using AI, this is where they’re using it in these different software tools, and hopefully we have the ability to work with the firm on benchmarking it against, like, what is the baseline, then essentially you have a very compelling business case of, oh, look at what we are doing right now.
What we’re doing is a lot better than the existing status quo. And I think that just deepens the trust between the clients and the law firms so much that the firms that are investing in this area will be the ones, we hope, to come out of this race and win it.
Marlene Gebauer (26:42)
So
many of the big players in legal AI operate as sort of walled gardens, so they promise safety from within, you know, we’re a closed system. Now, you have taken a different approach with TruthSystems, so it’s platform agnostic, it acts as a universal governance layer for any AI tool that the firm uses. Now, I’m interested, sort of, why did you take that tack when the rest of the market seems to be taking, you know,
Nam Nguyen (26:48)
Mm-hmm.
Marlene Gebauer (27:13)
the other way. What’s the strategic thinking behind this independent auditor model? How do you convince a firm that’s already spent a whole lot of money on an integrated platform that they need this third-party verification tool?
Nam Nguyen (27:29)
Yeah,
I think the industry reality is, first of all, I think Harvey and CoCounsel are building good products, and they’re building them in a very powerful manner and, as you said, in very contained systems, right? But governance is a slightly different paradigm, because governance is only as good as its coverage.
And I think you guys have seen this play out many times before: even when firms are buying a Harvey and a CoCounsel, they won’t live inside a single platform. Our bet is that they’ll keep adopting new tools, or even, these days, with firms even acquiring tech vendors, right? You see firms will even build custom AI workflows internally as well. So I think from your seat, you guys have probably seen this field cyclically evolve many times, and I think our goal is
to give firms the ability to put…
their own governance controls on top of any platform. So firms can kind of define, enforce, and evolve their own policies so that they don’t have to wait for another vendor’s roadmap. And I think this independent governance layer, it’s not about standing apart from all these powerful vendors, but I think it’s more to align with firms and empower them to own their own ecosystems. So whether that is a Harvey, a CoCounsel, or even their own app,
the firm’s ethical standards, their client restrictions, the confidentiality rules move with them. And I think the outcome, what we want, is the freedom that comes with the accountability, and the ability to innovate confidently, knowing that your governance stays consistent wherever your AI lives.
Greg Lambert (29:11)
Before we get to our crystal ball question, we’ve been asking folks: besides The Geek in Review, what do you look at to kind of, you know, keep up? What news or blogs or other ways are you helping yourself keep up to date with the rapid changes in legal tech and AI,
Nam Nguyen (29:19)
Yeah
Marlene Gebauer (29:20)
Thank you for doing that.
Greg Lambert (29:39)
and in your case, even risk management? What are your go-to resources?
Nam Nguyen (29:45)
Yeah. So I guess in the legal tech field, and just voices in legal that I admire in general, I think I very much look up to Jae Um. I think the way that she articulates that firms must leverage AI as a differentiator definitely strikes a huge chord, because essentially what we are in this race for is a race with also our own clients, because in-house counsel, they already have access to the same tools. So the edge comes from how can you apply
AI strategically to move upmarket as well. So I think she ties it together just so brilliantly, about how we should innovate the business model and not just sheerly adopt the technology. For matters related to AI, I actually take a slightly contrarian view. I just kind of believe that if you chase every new trend and new publication going out there, you essentially will chase the marketing cycle. And I think, you know,
true understanding here comes from revisiting a lot of fundamentals. So, as I mentioned earlier, Dwarkesh Patel has a beautiful interview with Richard Sutton, the father of reinforcement learning. And I think it’s just kind of a masterclass on how you revisit first principles in a world with so much noise. And yeah, lastly, I think for AI governance and risk management, I’ll give a very quick shout-out to Tara Waters and Ryan McDonalds,
two very powerful and thoughtful voices in the space that just kind of give a lot of high-quality, grounded content on, you know, how you can build or deploy AI systems that are trustworthy. And I think they do a very good job of, like, showing small examples where you can try it out yourself as well.
Marlene Gebauer (31:34)
All right, Nam, we are asking you the crystal ball question that we ask all our guests. So what’s the single biggest structural change you think AI governance tools like TruthSystems are going to force upon legal professionals’ relationship with technology? And is the industry, as currently constructed, prepared for that change?
Nam Nguyen (31:37)
Yes.
Greg Lambert (31:57)
I bet I know
Nam Nguyen (31:57)
Hmm
Greg Lambert (31:58)
the answer.
Marlene Gebauer (31:58)
No, no
Nam Nguyen (32:00)
No, it’s not, no, we’re not the same as we were yesterday. I think overall, you know, what AI governance will do is it’s going to bring a whole new level of transparency and intelligence to the legal profession as we’ve never seen before.
Marlene Gebauer (32:00)
it’s not.
Nam Nguyen (32:22)
Because, as I mentioned earlier, firms will finally see how AI is being used across their entire ecosystems. So not just the tools they bought, but also the public platforms, and seeing then how they’re being used, by whom, and what their impact is here. And I think a lot of voices have come out and said, like, hey, in this age, the data is your competitive advantage. And insight into these usage patterns, the adoption and the outcomes, will be how firms differentiate.
And as I said, one of the most exciting projects we’re doing right now is actually helping firms measure AI usage across all their vendors, to go and, like, show their clients and show how it affects the outcome. Because our philosophy is, yes, you know, TruthSystems, at the end of the day, our core is a GRC platform first. And we’re just so fortunate to start out in law and have law firms as our first big partners.
Our goal is not to partner with every law firm, but we want all the law firms that we serve to be successful, and all the law firms that we serve to have the capability to win. So the structural shift here, I would argue, is that governance and compliance will no longer be blockers. I think in this new age, we want them to become the differentiators, because…
then it’s not about slowing innovation down, it’s about how you’re scaling it responsibly, with transparency and trust at the core. And yeah, I think that’s my crystal ball and the future that TruthSystems would like to build.
Marlene Gebauer (33:58)
So
differentiators, not blockers.
Nam Nguyen (34:01)
Not
blockers, yes.
Greg Lambert (34:04)
All right, well, Nam Nguyen from TruthSystems.ai, I want to thank you very much for taking the time to talk with us today. This has been fantastic. Thank you.
Marlene Gebauer (34:14)
Yeah,
really good.
Nam Nguyen (34:16)
You know, thank you, Greg and Marlene, for giving me the platform. Really enjoyed being on the show, and also I’ve just been a fan forever.
Marlene Gebauer (34:26)
Thank you to all our listeners for taking the time to listen to The Geek in Review podcast. If you enjoy the show, please share it with a colleague. We’d love to hear from you on LinkedIn and TikTok.
Greg Lambert (34:37)
All
right, and Nam, if listeners want to learn more about your work and the work there at TruthSystems, where’s the best place for them to go?
Nam Nguyen (34:46)
Yeah, obviously, you know, our website, truthsystems.ai, for you to schedule time, you know, if you just want to chat with me about how we think through AI governance. I also try to post a lot of my thoughts on where the industry can head and the capabilities of this technology on my LinkedIn as well. So feel free to give me a follow, Nam Nguyen on LinkedIn. And yeah, super excited to be here again. And hopefully we’ll get to hear more from
all the audience on how they think AI governance can shape the future of the legal profession as well.
Marlene Gebauer (35:23)
It’s been a delight having you on the show. And as always, the music you hear is from Jerry David DeCicca. Thank you, Jerry, and bye, everybody.
Nam Nguyen (35:34)
Thank you guys so much.
