The Geek in Review podcast welcomed Kriti Sharma, Chief Product Officer of Legal Tech at Thomson Reuters, to discuss AI and ethics in the legal industry. Kriti talks to us about the importance of diversity at Thomson Reuters and how it impacts product development. She explains TR’s approach to developing AI focused on augmenting human skills rather than full automation. Kriti also discusses the need for more regulation around AI and the shift toward human skills as AI takes on more technical work.

A major theme was the responsible development and adoption of AI tools like ChatGPT. Kriti discusses the risks of bias and shares TR’s commitment to building trusted and ethical AI grounded in proven legal content. Through this “grounding” of the information, the AI produces reliable answers lawyers can confidently use, reducing the hallucinations that are prevalent in publicly available commercial generative AI tools.
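The “grounding” approach described above can be sketched in miniature as a retrieve-then-cite loop: answer only from a store of trusted passages, and attach a citation to every claim. This is an illustrative toy, not Thomson Reuters’ implementation; the corpus, the word-overlap retriever (a stand-in for real embedding search and a language model), and all names here are assumptions for the sketch.

```python
# Minimal sketch of "grounding": retrieve trusted passages first, then
# constrain the answer to cite them. Everything here is illustrative.

from dataclasses import dataclass


@dataclass
class Passage:
    source: str  # a citable document name
    text: str


# Hypothetical "trusted content" store.
CORPUS = [
    Passage("Doc A", "A contract requires offer, acceptance, and consideration."),
    Passage("Doc B", "Consideration is something of value exchanged by the parties."),
]


def retrieve(query: str, corpus: list[Passage], k: int = 1) -> list[Passage]:
    """Rank passages by simple word overlap with the query (a stand-in
    for embedding search) and return the top k."""
    q = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda p: len(q & set(p.text.lower().split())),
        reverse=True,
    )
    return scored[:k]


def grounded_answer(query: str) -> str:
    """Answer only from retrieved passages, and always cite the source,
    so every claim can be traced back to trusted content."""
    hits = retrieve(query, CORPUS)
    return " ".join(f"{p.text} [{p.source}]" for p in hits)


print(grounded_answer("What does a contract require?"))
```

Because the answer is assembled only from retrieved passages, there is nothing for the model to hallucinate, and the bracketed source lets a lawyer verify the claim, which is the property the episode attributes to grounding.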

Kriti shares her passion for ensuring people from diverse backgrounds help advance AI in law. She argues representation is critical in who develops the tech and what data trains it, to reduce bias. Kriti explains that diversity of experiences and knowledge amongst AI creators is key to building inclusive products that serve everyone’s needs. She emphasizes Thomson Reuters’ diversity across leadership, which informs the development of thoughtful AI. Kriti states that as AI learns from its creators and data like humans do, we must be intentional about diverse participation. Having broad involvement in shaping AI will lead to technology that is ethical and avoids propagating systemic biases. Kriti makes a compelling case that inclusive AI creation is imperative for both building trust and realizing the full potential of the technology to help underserved communities.

Kriti Sharma highlights the potential for AI to help solve major societal challenges through her non-profit AI for Good, for example by democratizing access to helpful legal and mental health information. She speaks about how big companies like TR can turn this potential into actual services benefiting underserved groups. Kriti advocates for collaboration between industry, government and civil society to develop beneficial applications of AI.

Kriti founded the non-profit AI for Good to harness the power of artificial intelligence to help solve pressing societal challenges. Through AI for Good, Kriti has led the development of AI applications focused on expanding access to justice, mental healthcare, and support services for vulnerable groups. For example, the organization created the chatbot tool rAInbow to provide information and resources to those experiencing domestic violence. By partnering frontline organizations with technologists, AI for Good aims to democratize access to helpful services and trusted information. Kriti sees huge potential for carefully constructed AI to have real positive impact in areas like legal services for underserved communities.

Looking ahead, Kriti says coordinated AI regulations are needed globally. She calls for policymakers, companies and society to work together to establish frameworks that enable adoption while addressing risks. With the right balance, AI can transform legal services for the better.


Listen on mobile platforms:  Apple Podcasts |  Spotify

Contact Us:

Twitter: @gebauerm, or @glambert
Voicemail: 713-487-7821
Music: Jerry David DeCicca


Greg Lambert 0:07
Welcome to The Geek in Review, the podcast focused on innovative and creative ideas in the legal industry. I’m Greg Lambert. Marlene Gebauer is out this week, so once again, you are stuck with me as the solo host. But I think you’ll enjoy our guest that we have on the show this week. So today we are honored and delighted to have Kriti Sharma, Chief Product Officer of Legal Tech at Thomson Reuters, where she leads the development and delivery of innovative and impactful products that leverage data analytics and AI. She is also the founder of AI for Good, an organization that creates intelligent, ethical and scalable technologies for social good. Kriti is a leading global voice on AI ethics and its impact on society, and has advised governments, businesses and international organizations on how to use AI responsibly and inclusively. She has given a TED talk on how to keep human bias out of AI and has been named in the Forbes 30 Under 30 for advancement of AI. So wow, that’s a lot. Kriti, welcome to The Geek in Review.

Kriti Sharma 1:18
Hey, Greg, thank you so much for having me. And as a self identified geek, I feel like I found my people.

Greg Lambert 1:23
All right, I think we have a guest host in the making here, hopefully. So, Kriti, we’ve worked with a longtime friend of ours, and actually our very first guest on The Geek in Review, Zena Applebaum. And we asked her to come back onto the show. And as much as she wanted to, she said, you know, really, we should go and talk with someone from what she referred to as the trifecta of TR’s legal team. And so that was you, as the Chief Product Officer; Mike Dahn, whom a lot of listeners of the show should recognize, who’s the head of Westlaw Product Management; or Emily Colbert, who is Head of Product Management for Practical Law. And as we were talking, I mentioned to Zena that I love the fact that two of these three points in this trifecta are women. And I’m curious as to your thoughts on that. And actually, as Zena said, I should prod you on this as well. How do you think this type of diversity in the leadership of Thomson Reuters’ legal products affects, you know, how the organization is run, and how products are designed and maintained and enhanced?

Kriti Sharma 2:39
Yeah, so Greg, first of all, we really have an awesome trifecta of Mike, Emily and myself, and then Zena and many other colleagues working very closely with us. I think what’s really special about Thomson Reuters is the inclusivity, and not only our diversity of backgrounds, but also our experiences. We come from many different backgrounds. I myself have a computer science background; I started building robots when I was very young, as a kid. We can talk about those robots another time. Now, I’m just really passionate about solving real-world challenges that bring access to justice for all, bringing more of a technology-driven approach and AI background to this field. Mike and Emily have such incredible depth in understanding of the product spaces, the world of legal, the practice of law. And it’s just absolutely incredible to see those different experiences fused together. And the reason why it’s so important, Greg, is not just the diversity at a leadership level, but across the ranks, across the organization. Because as we go into the world of AI, machines learn like humans do, and as they’re designed, they bring in experiences and knowledge from their creators and from the underlying data sets. And if we are to build ethical, sustainable AI technology that we can trust, we’ve got to have a diverse group of people building it; it has never been more important. And we’re very well aware of the challenges of biases in AI, the issues of sexism or racism, and all these challenges that we hear about and observe and experience in our daily lives. Now, when it comes to the next wave of AI technologies, we have to do this right. And one of the key questions here is: who creates this technology? And is this group diverse enough in their backgrounds and their knowledge and their experiences?

Greg Lambert 4:33
Yeah, like I said, I was super impressed with the makeup of what Zena called the trifecta. Thanks for talking about that. Now, I know there is just so much going on with legal technology and AI. And you know, we’ve been talking since November of last year on how it can enhance content. And I know that Thomson Reuters has been working a lot on this as well, not just since November, but for many years. Now, I know there’s a lot going on with the regulatory actions coming up with the potential acquisition of Casetext that was announced a few weeks ago. So we won’t go into any specific questions about that, because I know you’ve probably been told not to say certain things about it. But with those disclaimers out of the way, I did want to ask: what is your perspective on TR’s research and content advancing as all of this generative AI, and whatever’s next, comes along? What’s the path that you think TR wants to take?

Kriti Sharma 5:42
See, Greg, at TR we are very much at the forefront of applying AI to the most important tasks and jobs that our customers are doing. And our approach is very much one of augmentation of human work; we take a co-pilot approach rather than an autopilot one. It’s the companion that expands human skills and does the work for them, and humans are always in control. And the reason why we went down this path is that we deeply studied our customer needs and our advantage, where TR can bring real value to our customers. And we strongly believe that in our ability to ground, or train, AI in trusted content, we can produce trusted answers. And the biggest investment that TR is making: you’ve heard about our $100 million per year commitment to the advancement of artificial intelligence, and the biggest part of that $100 million is going into building trusted and responsible AI. And we don’t think it’s ever been more important than at this point in time to do this and to build this technology in the right way, because AI is going to change in profound ways how we experience services, how we live and work in our communities. And we want to make sure we do it the right way. Expanding more on what that means in action: we are bringing our content and research knowledge, which is trusted and well recognized; we’ve spent, to your point, decades enhancing and perfecting our approach there. We’re using that trusted data to train the AI models so they can give clearer answers. We can provide citations and links back to those trusted sources like Westlaw and Practical Law. And most importantly, we drive adoption and make it easy for people to use this technology by bringing it right where they work.
So an example of this is our partnership that we announced with Microsoft Copilot for Word where, right where you draft, we can help you get to a first draft within minutes by bringing in the most trusted answers and content, using the power of Westlaw, Practical Law and our document intelligence service. And that loops back to what makes AI really work at scale and in production, and not just in POC-and-pilot land: building it in a trusted and responsible way. And that’s what we’re all in for.

Greg Lambert 8:26
Yeah, I know that the trust issue is huge. In fact, I was talking with a couple of friends of mine yesterday about… first we started talking about the Goldman Sachs report that suggested that up to 44% of lawyers’ work could be replaced by AI tools. And we were going back and forth, and I said, look, that trust that you talked about: I think people are suddenly feeling the lack of trust with the tools that are out there on the commercial market. And you know, this has been driving most of our general counsels and our security teams to just talk about the risk side of using generative AI for legal practice. And not only that, but then the uncertainty of whether data that is put into the AI tools by attorneys is really secure and maintains the confidentiality of the client and of the law firm. As we were talking, I was feeling like, one, there’s still a huge gap of legal professionals that haven’t even used generative AI tools so far. And then there’s also this kind of pushback. So are you seeing any type of pushback, a “let’s slow this down”? It may just be temporary, but are you seeing kind of a cooling in the market when it comes to using AI tools in legal practice?

Kriti Sharma 10:00
So, Greg, what we are seeing, instead of a cooling down, is actually more of a speed-up towards thoughtfully created AI. And that’s an important nuance, if you allow me to expand a bit. We went through this phase in Q1 where everybody was excited and inspired by more consumer-facing, commercially available tools, ChatGPT being one of the prime examples of a foundation model that made the interface so easy that anyone could start using it and experiencing the power of these models. But then we got past that and realized, okay, there are all these challenges with this technology, and you can’t use it for every use case. And so that was the reality-check moment. ChatGPT was so Q1; Q2 and Q3 are all about: now let’s make it real and make it work for us. And that’s where we see a much stronger drive and enthusiasm and interest from our customers. Our partners really want to benefit from this technology, and they want to adopt it, but they want to do it safely, responsibly and thoughtfully. And that’s the big difference. And it’s not new. I’ve worked in AI for a very long time and in many different industries. I’m newer to legal; I came to Thomson Reuters and the legal world just over 18 months ago. But I’ve spent a very long time in my career working across various applications of AI in finance, in small businesses, in health tech and social justice. And the crux always comes down to: can humans trust this technology? Is it designed in a way where it doesn’t black-box information, but starts to white-box it? It’s got to show its working; it’s got to show you where it learned information from. Can you go back to a trusted source to explore more? What’s the impact on humans? Can they get started easily and get going? And is there a real benefit being generated by applications of this technology that is true to its capabilities and knows its limitations?
And the responsibility of clarifying these questions very much leans on the developers and creators of the technology, such as ourselves, and also on our customers and partners who are really joining us in driving that change process across the board.

Greg Lambert 12:23
Yeah, I remember going to Mike Dahn’s presentation, I think it was for WestlawNext, and he was talking about how they were implementing what would have been AI at the time. And that was that the system was supposed to learn as people were using it, you know, how to react to certain things, what should drift up to the top. And I remember people being worried about that, probably, you know, eight years ago, though, and I know that’s just exponentially shot up since then. But I think a lot of us in the industry are really looking toward the big players like Thomson Reuters to bring something to us that we do feel confident in.

Kriti Sharma 13:09
Greg, I would just add that we take that responsibility extremely seriously. We know that, given our scale and the quality of the content, there is a responsibility for us to bring a solution to market that sets the gold standard, that sets the tone for building and deploying quality AI solutions. And I’ve been just so impressed and inspired by the partnership from our customers and by teams who are really at the forefront of building this technology in a trusted, responsible way.

Greg Lambert 13:45
So you were on a terrific panel in June, “Charting a Way Forward for Ethical AI: Balancing Innovation and Responsibility.” And one topic you mentioned was that there is a significant lack of regulation around AI, and that you were really disappointed by that. So are there any specific regulations that you would like to see the EU, the US or others implement that would help establish that trust for the technologies going forward?

Kriti Sharma 14:22
Yes, a great question, Greg. And one that I’m very passionate about.

Greg Lambert 14:27
I could tell from your panel that you are passionate about it.

Kriti Sharma 14:32
I think at one point I did say I’m deeply, deeply, deeply disappointed.

Greg Lambert 14:37
Yeah, I left out a couple of the deeply’s on that.

Kriti Sharma 14:41
Yeah, I did have to tone down another “deeply.” But I just think, Greg, the future of how this technology is deployed, and the scale at which, should not be left to individuals and geeks like me to figure out. At Thomson Reuters, for example, we obviously establish our AI policy and our standards and our governance and our principles for how we are going to create trusted, responsible, ethical AI. But the true adoption of this technology for the benefit of humanity and society and business is going to come from specific rules and guardrails that have to be put in place across all producers and creators of this technology, not just the ones who opt in to doing this right. And this is where the regulatory frameworks are very, very important, if we are to give people confidence that they can feel safe in using this technology for more positive outcomes. What we are seeing in the regulatory space (and I’ve spent a lot of time going through the EU AI Act, for example) is that the regulators are taking somewhat divergent approaches to this technology. For example, in the EU, the draft EU AI Act is taking a risk-based approach: there are some applications that are outright banned, others that are considered high risk, some that are medium, and others that are low risk. And that comes as a framework: there are applications of AI that we shouldn’t be deploying in society, because they are unacceptable, for example large-scale surveillance and so on. And then there are others, for example in the legal and information world, where we can use trusted data and ground AI in facts, and we can really create access to justice at a much bigger scale. And so the thoughtful approach around creating a tiered, risk-based model is a great step forward. The UK has taken a different approach to the EU, in that they have historically taken more of a light-touch approach to regulation, relying on existing regulators to figure it out.
And the US is at a somewhat earlier stage. What we really need, at a very fast pace, is global coordination around regulation for AI. We live in an increasingly global and digital world; when we build tools and solutions for people around the world, we need that coordination so we can drive adoption in the right way and in a safe way. And that is my big call and ask for help, a call to action for policymakers and business leaders to really lean in and get our point of view and our needs across through coordination between industry and policymakers and civil society.

Greg Lambert 17:27
Yeah, I think you’re right, it has to be a global approach. I don’t know how practical it’s going to be, given the current political climate; I think this is going to be almost an AI arms race. Do you see that for countries that are going to look for ways to give themselves an advantage and disadvantage other countries?

Kriti Sharma 17:48
I think, Greg, from what I have seen for a very long time in the AI space, this is a very polarized tech community. There’s a group of people, thinkers of our times, who believe the existential risk, the longer-term risk, is far greater, and they would push for us to slow down or pause. And then there’s the other community of people and builders who are more concerned about making sure that we use this technology for the right applications and we build it the right way, the more near-term needs around eliminating bias from systems, creating more accurate AI that can help you be more effective in your work. And that space has been across this spectrum for a while. And what I’m hoping we’ll get to, and we are getting a lot closer to it, is meeting in the middle: let’s make sure the technology we’re building today is trusted and responsible, so we can really get the benefits out of it. And let’s start to work through regulatory frameworks that help us in the long term, and identify applications where we shouldn’t use this technology and where we should use it more. This is a technology like any other technology; it’s more about what humans do with it.

Greg Lambert 19:07
I think I’m gonna use that quote at some point. You know, we’ve been talking about the development of the tools and the things that are available, and the trust issue. But you know, this is going to be a huge shift in how, and I’ll just limit it to the legal profession, legal professionals do their day-in, day-out job. What kind of skills development and talent do you think we’re going to need going forward as the nature of work changes?

Kriti Sharma 19:40
Yeah, Greg, this is such an exciting time for people to come and build products, and I do believe it firmly. I went down the traditional path in my career of building technology: starting as a kid, then going to engineering school and then computer science, along that path really getting to grips with how to build AI systems at scale and build large-scale technology platforms. That game has changed. Now we’re moving past that “teach everyone to code” movement, that kind of obsession we had that a three-year-old must know how to write code. We’ve moved past that, to this new world where AI can do a lot of that work of processing information, or even writing code, which is one of the most expensive forms of knowledge work. We’re now moving to this world where the human role is to ask the right questions to get the right response back from the machines. It’s about curiosity, it’s about emotional intelligence and empathy. Those human skills have never been more relevant and important than they are now. There’s a very famous quote going around in the tech circles right now that “the newest programming language is the human language.” I only speak two human languages, Greg, but I speak 17 computer languages, so I’m gonna go retrain. This is such an exciting time: that barrier of getting into the world of technology, or building a career there, or creating new solutions and tools and products, has gone down significantly. And I can’t wait to see a much broader group of people with many different experiences coming into this field. A few years ago, as I was building an AI product at the time, a similar large-language-model conversational system, we hired our first AI personality trainer slash conversation designer. And at the time, that role was about helping the machine learn how to talk to humans: the tone of voice, how to interpret people’s needs, and how to be transparent.
For example, declaring that you’re talking to a machine and not a human, and what the limitations are. And that was the most important and most impactful role that we had in the team. And that person had no background in tech, but they were excellent at language and communication. So we would love to see more of that; I think we’re about to see it come in spades. The technologist of the future is gonna look different. They’re gonna come from many different backgrounds. And going back to your diversity point, your first question, this is how we’re going to solve it. I really hope that this is how we solve some of the diversity gaps and challenges we see.

Greg Lambert 22:28
Yeah, I wouldn’t have thought about that, about having to train the personality of the AI, especially in interactive AI like you have. Because I’m a solid Gen X person, and I fit that stereotype of: if you have to do more than two emails, pick up the phone, because you’re obviously talking past each other, not to each other. So I imagine that it’s the same way with the machine. And we’ll talk a little bit about this later, but I was watching a video on one of the products that you helped develop, rAInbow, which clearly states you are talking to a machine, but it still allows people to do that interaction. And I think having that knowledge upfront helps set the tone and confidence of the conversation going forward. So like you said, emotional intelligence and human interaction are going to be more important now than probably ever.

Kriti Sharma 23:24
Yeah, absolutely. And, Greg, the example you mentioned, rAInbow: we launched this tool prior to my Thomson Reuters life, to help people in vulnerable conditions get access to the help that they need. In this case, rAInbow specifically targets people who are at risk of violence, domestic violence or gender-based violence. It is often a very difficult circumstance; taking the first step feels hard. There’s judgement, shame, stigma, and challenges around getting access to services that often only work Monday through Friday, 9am to 5pm. And what we found is that building a tool that’s not there to solve the problem, but to provide information to help make taking that first step a bit easier, was the purpose. It is designed to be transparent and clear that you’re talking to a machine and what its limitations are. And when we did that, and it was clear about the human and machine roles and the hierarchy of that relationship, we were able to positively impact and augment some of those needs in a very difficult situation.

Greg Lambert 24:37
Well, since we brought up rAInbow, let me just jump into the conversation about AI for Good. So you’re not new at all to the AI discussion. You were inspired by your experience with the Obama Foundation Summit all the way back in 2017 to start an organization called AI for Good. What happened at that summit that drove your desire to start AI for Good? And what was the problem that you wanted the organization to solve?

Kriti Sharma 25:10
Yeah, it’s a great question. It started from this realization that we have such a profound and powerful technology in our lives, and it is available to us. And I just did not want to look back years from now and think the best thing we did with this tech is make people click more ads, or get them hooked to their phones. At the same time, we were seeing huge needs from frontline organizations solving the hardest problems that our society and communities face, who didn’t have all the resources to scale. And so it originated and grew into a movement. And now we’ve evolved even to the next stage of AI for Good as a movement and not just an organization. So whether you’re a business, a nonprofit, or an individual, you can start to build applications of AI in a safe, responsible way that address some of those bigger challenges, or at least augment the response to them. And, yeah, I have to admit, candidly, it is the most rewarding and the most terrifying thing I’ve ever done in my life: to take on challenges of mental health and loneliness and violence, topics that I don’t know very much about, but using it as an opportunity to work with people who are so experienced in that world, in that field, who have so much to offer and could just benefit from technology supporting those needs. And I would say it’s been very humbling.

Greg Lambert 26:55
Now, can you tell the audience a little bit more about what AI for Good does, and kind of what you see it doing over the next few years?

Kriti Sharma 27:03
Yeah, AI for Good builds products, solutions and experiences for some of the biggest social challenges that face our world and our communities. We’ve done a lot of work supporting frontline organizations who address and provide services to people at risk of violence. We are deeply passionate about extending that to enabling access to justice, which very much aligns with my work at Thomson Reuters in the legal world. We do a lot of work on enabling and democratizing access to trusted information in the mental health space for young people. Our focus has historically been primarily in developing countries, in South Africa and India. What we’ve seen over the last few years in particular, Greg, is this shift to a movement: there are AI technologists, researchers, civil society members, business people. It’s such a strong movement that people are very curious to learn how they can contribute or how they can use AI in a meaningful way. And our hope and dream is that we’ll stop the conversation about AI over time, and we will all be talking about the positive impact that we can create and the sustainable-development-goal-related problems that we can solve. The ideal success will be when it’s more about good and less about AI.

Greg Lambert 28:32
So, you know, there’s a lot of discussion on the potential for the new AI technologies to help underserved populations gain access to legal services or health services. And I know you’re doing great work with AI for Good, but how do you see bigger companies like Thomson Reuters and others taking this potential of access to justice, access to health care, and turning that into actual services delivered to the community, rather than just solely focusing on “how do I get more people to go use Westlaw”?

Kriti Sharma 29:10
Yeah, I think we definitely will see more people using Westlaw, because it becomes a lot easier to use it to do your work. I think the potential of AI is across multiple domains, right? It’s about, in your professional work, getting your work done faster and more effectively and more efficiently and with greater confidence. That’s the work you’re seeing Thomson Reuters produce in the legal professional space, with faster access to research and answers to questions and expertise in drafting. Then there is the opportunity of changing and improving how our court systems work, for example, by giving them access to the same trusted information, by making them more effective, giving them digital tools; we do a lot of that work through our core products like Case Center. And then, increasingly, a big responsibility incumbent upon all of us is to make this information more easily available to a broader community of people. For example, in the US, Thomson Reuters has a service called FindLaw, where we have over 100 million visitors who come to our site to understand their legal rights and connect to a trusted attorney. And we will see more and more of that support grow towards large communities of people around the world. What I think is a real opportunity with AI is how we can solve the unsolved, or serve the underserved. Legal is definitely one of those worlds where there is such a big opportunity to do more, and there are passionate people who want to solve the problem of that underserved community. I really, really hope that with these advancements in AI, with the productivity improvements, with the augmentation of human skills, we can bring quality legal services to more people.

Greg Lambert 31:06
Tying into that, I know the question was around the potential for access to justice and access to healthcare, but part of that requires a better understanding of the technology, what it can do, and experimenting with AI tools to help governments or companies better understand the technology so that they can turn around and, hopefully, fill these needs. So do you think that having things like regulatory sandboxes or experimentation zones can be an effective approach to how we learn what the tools can do?

Kriti Sharma 31:46
For sure. We’ve seen this successfully deployed in the FinTech world before, right, where there was more collaboration within industry to strike the right balance between innovation and regulation. We also have a gap in the availability of talent and skills to work with, and for, regulators to sort out some of these questions and problems. We know how critical this skill set is, and how scarce it can be at times in our world. So absolutely, I definitely think more collaboration between industry and government and policymakers. And I’ll pitch my dream again, Greg, of more international alignment.

Kriti Sharma 32:29
I’m deeply, deeply disappointed, and we really hope to see more of that coordination. We all need to get out of our comfort zones and do more of it, and it’s incumbent upon all of us to get this right. At Thomson Reuters, we are taking a leading position in this space, leading the conversation with our many different stakeholders to champion the needs of our customers to the best of our abilities.

Greg Lambert 32:57
Kriti, we’re at the point of the show where we ask everyone our crystal ball question. So I’m going to ask you. I know you’re in a hotel room, so hopefully you’ve packed your crystal ball with you. But if not, just think out and look into the future for us, and let us know: what do you think are going to be some of the biggest challenges or changes in the legal industry? I always say over the next two to five years. Maybe that’s a little too far out, but over the near term and longer term, what do you see as challenges?

Kriti Sharma 33:29
Since you said challenges and changes, I’m going to talk about three things that I deeply care about. I think it’s the most important, and it’s a helpful framework to think about AI. First, we’ve got to be very intentional about who creates AI. And by that, I mean people from many different backgrounds, who have a good mix of skill sets, and who are diverse in their opinions, their experiences, and their knowledge. I think it would be a real sad state if it’s only geeks like me who create this technology. It needs to be a broad group of people, particularly those with expertise in the legal world, who really know the domain. And that should be reflected in the data and the content that’s fed into these AI systems; that content needs to be trusted. Second is how we build it. It’s one thing to just create a POC or a quick wrapper on top of GPT-4 and ship it out, and quite another to build it thoughtfully, to understand the customer needs that really need to be solved and the right problems that need to be addressed, and to create the transparency and trusted link-backs, or citations, back into the information so you can use it confidently. And then the third one, perhaps the most important one, Greg, is what do we use it for? Going back to that earlier point: we could build this technology to make people click more ads, or we could use it to meaningfully, in a significant way, drive productivity enhancements, give people time back, create easier access to systems and information, make it easier to enter this profession, and create impact in a much more positive way. So my crystal ball would be that we will know the answers to the questions of who creates it, how do they build it safely, and what do they build it for.

Greg Lambert 35:20
All right, well, Kriti Sharma, Chief Product Officer of Legal Tech there at Thomson Reuters, thank you very much for taking the time to talk with us this morning. Appreciate it.

Kriti Sharma 35:30
Thank you so much, Greg.

Greg Lambert 35:31
And of course, thanks to everyone listening for taking the time to listen to The Geek in Review podcast. If you enjoy the show, please share it with a colleague. We’d love to hear from you, so you can also reach out to us on social media. Marlene can be found at @gebauerm on Twitter, and I can be reached at @glambert on Twitter. I’m now on Threads as @glambertpod, so I had to add the “pod” to the end of it, and not only that, but now the podcast has a Threads account, which is @TheGeekInReview. Kriti, if someone wanted to find out more or continue the conversation, where’s a good place they could find you online?

Kriti Sharma 36:10
They can reach out to me on LinkedIn as Kriti Sharma, or on Twitter at @Sharma_Kriti.

Greg Lambert 36:16
So listeners, you can also leave us an old-fashioned voicemail on our Geek in Review hotline at 713-487-7821. And as always, the music you hear is from Jerry David DeCicca, who actually has a new album out, so check that out. All right, thanks everyone.