This week we welcome Jiyun Hyo, co-founder and CEO of Givance, for a conversation about moving legal AI past shiny summaries toward verified work product. Jiyun’s path runs from Duke robotics, where layered agents watched other agents, to clinical mental health bots, where confident errors carry human cost. Those lessons shape his view of legal tools today: foundation models often answer like students guessing on a pop quiz, sounding sure while drifting from fact.

A key idea is the “last ten percent gap.” Many systems reach outputs that look right on first pass yet slip on a few crucial details. In low-stakes tasks, small misses are a nuisance. In litigation, one missing email or one misplaced time stamp risks ruining trust and admissibility. Jiyun adds a second problem: when users ask for a tiny correction, models tend to rebuild the whole output, so precision edits become a loop of fixes and new breakage.

Givance aims at that gap through text-to-visual evidence work. The platform turns piles of documents into interactive charts with links back to source files. Examples include Gantt charts for personnel histories, Sankey diagrams for asset flows, overlap views for evidence exchanges, and timelines that surface contradictions across thousands of records. Jiyun shares early law-firm use: rapid fact digestion after a data dump, clearer client conversations around case theory, and courtroom visuals that help judges and juries follow a sequence without sketching their own shaky diagrams.

Safety, supervision, and security follow naturally. Drawing on robotics, Jiyun argues for a live supervisory layer during agentic workflows so alerts surface while negotiations or analyses unfold rather than days later. Too many alerts, though, create noise, so tuning confidence thresholds becomes part of product design. On security, Givance works in isolated environments, strips identifiers before model calls, and keeps architecture model-agnostic so newer systems slot in without reopening privacy debates.

The episode ends on market dynamics and the near future. Jiyun sees mega-funded text-first platforms as market openers, normalizing AI buying and leaving room for second-wave multimodal tools. Asked whether the search bar in document review fades away, he expects search to stick around for a long while because lawyers associate a search box with control, even if chat interfaces improve. The bigger shift, in his view, lies in outputs: more interactive visuals that help legal teams spot gaps, test case theories, and present evidence with clarity.

Listen on mobile platforms: Apple Podcasts | Spotify | YouTube

[Special Thanks to Legal Technology Hub for sponsoring this episode.]

Email: geekinreviewpodcast@gmail.com
Music: Jerry David DeCicca

Transcript:

Greg Lambert (00:00)
Hi everyone, I’m Greg Lambert with the Geek in Review and we have with us today Sam Moore from Legal Technology Hub. And Sam, you’re going to talk to us a little bit about the technology selection offering that you have there.

Sam Moore (00:12)
Yep, that’s great. Thank you, Greg. So law firms and law departments come to LegalTech Hub every day to find out key information about LegalTech vendors and products. We’ve got so much information available to any kind of technology buyer, but sometimes we’re engaged in a much more hands-on capacity. We actually offer a technology selection service, which is a six-step advisory framework we’ve designed to help our clients choose the right solution for them.

And our approach is entirely agnostic, independent, and is designed to help our clients by giving them exactly what they need, whether that’s conducting stakeholder interviews and needs assessments on their side, through vendor demonstrations and shortlisting, all the way towards producing an executive report outlining our top three recommended solutions for their particular business challenge, ready to go to their decision makers to act upon. So if you have a technology selection project coming up, please go to legaltechnologyhub.com slash advisory to learn how we can help.

Greg Lambert (01:18)
All right, well, sounds super useful. Thanks for putting a spotlight on that, Sam.

Sam Moore (01:21)
Thank you.

Marlene Gebauer (01:30)
Welcome to The Geek in Review, the podcast focused on innovative and creative ideas in the legal industry. I’m Marlene Gebauer.

Greg Lambert (01:36)
And I’m Greg Lambert, and this week we are thrilled to welcome a guest who represents a really critical pivot point in the generative AI narrative. He is the co-founder and CEO of Givance… Givance? Givance, thank you. Yeah.

Marlene Gebauer (01:51)
Givance. I want to know where that came from. Sorry, go ahead.

Jiyun (01:55)
Yes.

Should I answer that or?

Greg Lambert (01:57)
So the company is focused on moving legal AI beyond the hype cycle of text summarization and into the emerging area of agentic verification. And what I’m really excited to talk about is the high-fidelity visualization.

Marlene Gebauer (02:14)
So Jiyun’s background is distinct. It’s rooted in academic robotics and clinical mental health AI, which gives him a unique lens on the risks and safety requirements in legal tech. Givance focuses on closing what he calls the last 10% gap: the chasm between AI that can plausibly chat and AI that can reliably visualize complex evidence for high-stakes litigation.

Greg Lambert (02:21)
Yeah.

Marlene Gebauer (02:43)
Jiyun Hyo, welcome to The Geek in Review.

Jiyun (02:45)
Yeah, thank you. Thank you Marlene and Greg for having me here.

Greg Lambert (02:49)
Yeah, so, well, I guess before we begin, let’s answer Marlene’s question. So, Givance?

Jiyun (02:55)
I see. I’ll keep it short. So when we first started, we were actually a nonprofit startup selling to nonprofits to automate their fundraising process. And we came up with this name because it’s Give and Events. So we are Give and Events: Givance. But now we’ve pivoted into legal tech, and we just kept the name.

Marlene Gebauer (03:14)
I like it. It’s very, it’s different. It’s different. And I like it.

Greg Lambert (03:15)
Yeah.

Jiyun, your background is really fascinating, and I know it spans a bit over a decade, but you didn’t start in the law firm arena. You actually started in the General Robotics Lab at Duke, training AI agents to supervise other agents. And then you moved into mental health, building chatbots that need to be both empathetic and accurate, which is really, really important in the mental health area as well.

Jiyun (03:52)
Yes.

Greg Lambert (03:53)
So if I’m a lawyer or a legal professional, what are the needs or pain points that would make me reach out to Givance to help me solve them? Let me just back up and let you talk big picture about what you’re doing.

Jiyun (04:08)
Listen.

Yeah.

So most of the solutions that I see are automating different workflows, or summarizing the internal knowledge that you already have into a readable, digestible format. That’s most of the platforms I see. But I’ve talked to lawyers who use these platforms on a day-to-day basis, and they’ve told me that these tools have become a set of features they access here and there, but they don’t really replace their entire workflow.

So for example, if I have five documents I have to read, I’ll upload the documents, ask some questions, and then come back, because they know the other features these platforms offer are not 100%. They know they’re going to have to go back and check everything, so they stay away from those features. And also, when you summarize these things, you’re summarizing 10,000 documents into maybe two, three, four pages. But two, three, four pages is still a lot for some people to digest, especially if you’re outlining all the events in a timeline. You see all these numbers and dates, and you just can’t really see clearly who was here at what time and for how long.

So what we do is turn your internal knowledge into digestible visuals. For example, Gantt charts to show all the executive management: who was here from 2017 to 2025, and what kind of role were they playing. We can color-code everything, and it’s all traceable back to the documents. So we see ourselves as, I guess, the second-generation AI. The first generation is always text to text, and the second generation is text to speech, text to action, text to visuals. If you think about all the humans on Earth, 80% of us are visual learners, but most of our knowledge is stored as text, and that’s what lawyers are forced to work with. And I can see why that’s the case, because creating these visualizations is really, really time-consuming. Before the advent of AI, people had to hire other people to do it, which means you would have to wait a week or two before something got sent back to you, and it just doesn’t really work that way. Now that we have AI that can produce these visuals, instead of waiting a week, you can wait a minute and get interactive visuals that help you understand. So we’re just at the start of text-to-visual AI coming into legal tech.

Marlene Gebauer (06:50)
Yeah. So you’ve written about the danger of outcome-supervised AI in mental health, where the bot learns to sound like a doctor but lacks clinical reasoning, which, you know, can lead to dangerous hallucinations. How does that specific risk translate to the legal field? Are we currently seeing outcome-supervised legal tools that sound really confident but are legally hallucinating a bit?

Jiyun (07:15)
Yeah, so just to give you some context, the foundation models are trained using the outcome-based supervision method. So fundamentally speaking, all of these AIs will try to sound confident, and they will hallucinate. I don’t think there’s a foundation model that doesn’t hallucinate. I found that out the hard way when I was working on the mental health AI chatbot, because we had all these amazing products we built, and some of them would hallucinate, and that would turn off the mental health professionals. In some other cases, not for our company, it has created lawsuits and lots of problems. So I’ve seen firsthand how much impact that can have on everyday people and users, and I carried that over from the mental health field.

When I first got into the legal tech field, like three weeks ago, I started seeing all these other products that answer questions based on documents or evidence. Most of them are, of course, using foundation models trained with outcome-based supervision. For example, I saw some companies that would draft motions, summarize evidence, or answer questions based on the legal documents that are available. And depending on the retrieval method, they definitely hallucinate, and a lot of people have to spend their time going through all the answers, which kind of kills the whole point of everything. I’m going to talk more about some of the other retrieval methods that I think a lot of vendors should pay attention to, and there’s also a reason why vendors haven’t paid attention to them: traditional RAG is really cheap, and it’s good for some purposes, but in some of the high-fidelity, high-risk situations it will hallucinate. I don’t want to name specific products, because I really do have a lot of respect for them, but I’ve heard offline, off the record, about many instances where people have had to check everything again, and that kills the whole point of using this software.

Greg Lambert (09:31)
Yeah.

Marlene Gebauer (09:31)
Yeah, I mean, I think it’s good to sort of understand what other options are available. So that’s great.

Greg Lambert (09:37)
Yeah, and just to add, I’ve been talking about this, about the reward system, and my new story that I tell people is: think of taking a pop quiz on a topic that you may not be all that familiar with. You weren’t rewarded for saying, I don’t know the answer to this. You were rewarded for trying to create an answer that you thought might actually sound plausible.

And so I tell people, when you use a closed foundation model that’s only working off the information it has, and you’re asking it something that’s outside its expertise, then traditionally it won’t tell you it doesn’t know. It was rewarded for coming up with a plausible answer, whether that answer was… yeah, because I would rather give it a shot and possibly make a D or a C on the grade than…

Marlene Gebauer (10:22)
I’m going to get, I’m going to give it a shot. I’m going to give it a shot.

Jiyun (10:25)
Yeah, exactly.

Mm-hmm. Yes, absolutely, yes.

Greg Lambert (10:31)
then it’s a zero.

So we talked about this outcome-based approach, and with the robotics research you’ve done, you focused on hierarchical agents supervising the process and checking as the processes are done. So as we move more toward, say, agentic AI when it comes to legal, to research and some of the workflows that we have,

Jiyun (10:45)
Mm-hmm.

Yes.

Greg Lambert (10:58)
Now, where do you see this going? Say we rely upon these agents to do a lot of the negotiations and to conduct things autonomously. Do we need some kind of intermediary supervisory agent layer, like you talked about with the robotics?

Jiyun (11:22)
Yeah, absolutely. And it’s a really good thing that I see many products already moving in that direction. So I would say yes, a supervision layer is absolutely necessary, and not just after the fact. Let’s say there’s an agent that raises some flags: you want to know as the negotiation is going on, not after seven days of negotiation and after all these tasks have been done, “hey, here are some flags that could have popped up.” So what I’ve been saying is that as the AIs are negotiating contracts and so on, they should be able to flag things live to the manager. Having a human involved is really, really crucial. But at the same time, you need a system where the AI can notify them as soon as something happens. Let’s say the confidence level drops below a certain threshold; then it should be able to tell the person, the manager, what’s going on, and there should be a really easy way for that person to check whatever it means. So yes, I would absolutely say a supervision agent layer is necessary, and it is being implemented in many different cases. But I’ve also seen many cases where they want to implement something like this and it kind of kills the accuracy, and in the end they’re like, okay, so what did we do? It kills the user experience: I get flagged way too many times, and that kills the point. So where is the balance? That’s something I’m pretty interested in figuring out.
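A minimal sketch of the threshold-based supervisory check described above. The step fields, threshold value, and alert cap are illustrative assumptions rather than Givance’s actual implementation; the point is that review happens live, per step, and the threshold is a tunable product decision.

```python
from dataclasses import dataclass

# Illustrative only: a toy supervisory layer that watches an agent's steps
# and alerts a human when confidence drops below a tuned threshold.

@dataclass
class AgentStep:
    description: str   # e.g. "proposed indemnification clause edit"
    confidence: float  # model-reported confidence, 0.0 to 1.0

class Supervisor:
    def __init__(self, threshold: float = 0.8, max_alerts_per_run: int = 5):
        # Both knobs are product-design choices: too low a threshold misses
        # problems, too high drowns the reviewer in noise.
        self.threshold = threshold
        self.max_alerts_per_run = max_alerts_per_run
        self.alerts: list[AgentStep] = []

    def review(self, step: AgentStep) -> None:
        """Called live, after every agent step, not at the end of the run."""
        if step.confidence < self.threshold and len(self.alerts) < self.max_alerts_per_run:
            self.alerts.append(step)
            print(f"ALERT: low confidence ({step.confidence:.2f}) on: {step.description}")

# Usage: the negotiating agent reports each step as it happens.
supervisor = Supervisor(threshold=0.8)
supervisor.review(AgentStep("accepted counterparty's revised payment terms", 0.93))
supervisor.review(AgentStep("reworded limitation-of-liability clause", 0.61))
```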

Marlene Gebauer (12:59)
So you described Givance’s mission as closing the last 10% gap. You’re arguing that today’s best foundation models can really only reliably get to about 90% quality, which I’m thinking, wow, that’s pretty good. But I guess it’s not, whether it’s text generation or chart generation. So can you define that last 10% for our listeners? Why is generating, say, a visual timeline of a conspiracy so much harder?

Jiyun (13:22)
Yeah.

Greg Lambert (13:27)
Okay.

Marlene Gebauer (13:28)
for an LLM than just summarizing the emails involved.

Jiyun (13:32)
Yeah, so this carries over from the outcome-based training of the foundation models. I’ve built in many different verticals, and I’ve seen the same pattern emerge over and over again: the AI agents can produce something that looks really plausible, but it might have one or two mistakes here and there.

Greg Lambert (13:42)
Okay.

Jiyun (13:50)
Because of those one or two mistakes, you can’t really use it. For example, say you’re generating a chart and you like everything about it. It looks really amazing, the colors are amazing, but there’s a little overlap in the text and the text looks kind of ugly, or maybe one data point is wrong for some reason, and you want it to make a precise iteration of the chart. So you tell ChatGPT, or whatever AI agent you’re using, to just fix that one little part, and what the AI is trained to do is produce everything from scratch again. It’s really hard to do precise iteration once you’ve gotten to that 90%.

To give another example from coding: if I vibe-code something, it looks really great, but then say one button doesn’t work, and I ask, hey, can you move this button slightly to the left, or make it slightly bigger, or change the color? Sometimes it does it right, but other times it completely messes up all the other aspects of the website. That’s what I’ve seen across all these verticals. If you’re not using it for work you charge your clients for, I think it’s okay that it’s not going to produce something that’s 100%; you can give it to your clients without checking everything and they’ll be happy. But there’s always going to be that little thing that doesn’t quite match what you want, and iterating based off of that is really, really impossible for the current models. Yeah.

And the same goes for the visual timeline of a conspiracy that you just mentioned. For a conspiracy timeline, you need to outline all the people involved, all the events, all the evidence you collected: emails, depositions, all these different things. And you can’t miss even one. If you miss one, then the trustworthiness of this whole piece of evidence you created is put into question, and it’s not going to be admissible.

Greg Lambert (15:16)
Okay.

Jiyun (15:44)
And so yeah, you just lose trust in the output. That’s why having all these layers to ensure 100% accuracy is really, really important, but it’s really slow and really expensive, and I think that’s why a lot of vendors stay away from it.

Marlene Gebauer (16:02)
I was just thinking about when you vibe-code. I had to make a radar screen, and I had it in four sections. And I was like, all right, make it five sections, and then the whole thing just goes crazy. All of the lines are outside the circle, and it’s like, okay, this is not helpful. So I just had to laugh when you were saying that.

Jiyun (16:17)
yes, absolutely.

Yes.

Greg Lambert (16:22)
I know Marlene’s a big visual person. She’s done some presentations on data visuals, so this is right up our alley. So let’s talk about visual clarity. I was doing some research, and you use something called D3.js for these publication-quality visualizations.

Marlene Gebauer (16:31)
Yeah, absolutely.

Jiyun (16:37)
Mm-hmm.

Yes.

Greg Lambert (16:46)
So, in e-discovery, we’ve had things like concept clusters for a decade. How does your approach at Givance, the visualizations, differ from what we’re used to? Is it about exploring the data, or is it more about proving the case? What’s the reasoning?

Jiyun (17:00)
Mm-hmm.

Yeah, those are really good questions. The way we see it, it’s actually both: it’s both exploring the data and proving the case. I can only speak from our current use cases, the law firms we have using Givance. Their clients give them all this data, depositions and all the evidence they supply, and they basically have to make sense of everything. Before, they had to have paralegals go through and extract the key dates and maybe construct these charts for them. But now they’re using Givance to do that instead of relying on paralegals. And because of that, they can better understand the case, so they can come up with a better legal theory for how to defend it. What our law firms also do is take these visualizations to the clients and ask, hey, what do you think about this visual?

Jiyun (18:06)
Is there anything missing? It improves the communication between the law firm and the client. And also, what our client law firm is doing is using the visualizations that are generated in court, to better explain things to the judges and the juries. The partner at the law firm has told us that a lot of the time the judges are drawing their own diagrams as he’s describing the evidence, and the juries are sitting there having no idea what it is. Once he describes the evidence, he gives a binder full of evidence, like 40 pages of documents, to the jury, and the jury’s like, okay, I don’t know what’s going on. That’s been a huge problem for him in the past. But instead of doing that, he’s now presenting these visuals in court as a presentation, and that’s helped the jury and the judges better understand the case.

Jiyun (19:04)
So that’s what we think is different for us: better understand your internal knowledge, whether it’s evidence, whether it’s the discovery process, whether it’s M&A, for example. Whatever internal data you have, we can visualize it and you can better understand it.

Marlene Gebauer (19:21)
So you’ve applied your engine to analyze IRS data from Korean-American nonprofits, and I guess this is where Givance started. That’s handling over $127 million in budget analysis. What did that project teach you about the messiness of real-world structured data, now that you’re applying it to litigation discovery?

Jiyun (19:28)
Yes. Yes.

Greg Lambert (19:34)
So.

Jiyun (19:43)
Yeah.

I see. So I thought this was a very interesting question, because I actually thought the nonprofit data was much easier to analyze. I’ve dealt with some crazy, messy data, and that’s just what it is: any data we work with in the real world is always messy. People have different formats. For example, if I’m analyzing a PDF, this organization has this format, that organization has that format. And then people talk in different styles.

Greg Lambert (19:56)
You’re probably right.

Jiyun (20:20)
They use different diction, different tones, and it’s just impossible to expect the data set to be clean and consistent. So we’re really used to dealing with data with lots of noise. My co-founder also used to work at Google Research and Snapchat, where he analyzed billions of Snapchat usage events, and of course everybody uses it differently. All together, data being messy is expected; that’s why it takes skill to clean the data. So I didn’t really learn anything new per se, but I do think we can apply whatever we learned analyzing data from different verticals to litigation discovery. Actually, legal data, in my opinion, is much easier to analyze than some of the other kinds. Many cases are similar to each other in some sense. Say you’re dealing with commercial litigation: of course the specific evidence is different, but it’s always audio files, it’s always PDFs, it’s always website screenshots. So it’s the same type of format from the perspective of a computer, an AI, analyzing it. So yeah.

Marlene Gebauer (21:16)
Really?

Greg Lambert (21:40)
Okay, well, we can’t have an episode where we’re talking about products without doing the first thing that Marlene and I have to do, which is clear up security concerns. I know that at Givance you’ve talked a lot about, or emphasized, isolated environments and encryption in your security. And I’ve read where you’ve been critical of OpenAI and kind of the black box,

Jiyun (21:52)
Yes.

Greg Lambert (22:10)
the fact that you’re not quite sure how things are going. So, for Givance, are you relying upon the closed foundation models, or are you building some type of bespoke tool or smaller models? What is it that you’re doing in these isolated environments?

Jiyun (22:30)
Yeah, I mean, security is really, really important, and what I’ve realized is that each law firm has its own security policy, so we of course adhere to all of that, whether it’s SOC 2, whether it’s GDPR, whatever the law firm wants us to do, we can do. But to answer your question about the foundation models: we do use foundation models, and the reason is that these foundation models, let’s face the truth, are improving so fast that if you train your own smaller model, it might outperform them for maybe the next three months, and then a new model gets released and you have to train another one. It’s not reliable to go that way. So instead of doing that, what we do is try to come up with an architecture.

Greg Lambert (22:58)
Yeah.

Jiyun (23:19)
You know, agentic systems have many models making up the entire system, so we make it pretty model-agnostic. And whenever we have to use the foundation models, we remove all the PII, all the sensitive materials. For us specifically, we’re only using them to produce these D3 charts. So just to give you a little idea about our system: we have a planner agent that plans what things to visualize, what to put on the x-axis and y-axis, colors, different types of charts. Then we have a coding agent that actually codes the chart, and when we send a request to code it, we remove everything sensitive, so from their perspective someone is just trying to make a fancy chart; they don’t know what it’s for. And we also have another agent, a reviewer agent, which is a VLM, a visual language model, that looks at the previous chart and tells the coding agent what to fix. That’s the piece I talked about that the foundation models are missing: it lets you do precise iteration. So we use foundation models because, I mean, obviously billions of dollars are going into them, they’re going to improve, and we can’t really compete against that. But we don’t want any sensitive information going into these foundation models. So I think that’s our approach to the closed models.
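A rough sketch of the planner / coding / reviewer loop described above, with stubbed model calls. The function names, prompts, and round limit are assumptions for illustration, not Givance’s actual code; the structure just shows how a vision-model reviewer can request targeted fixes instead of a full regeneration, with PII assumed to be stripped before any model call.

```python
# Illustrative sketch only: a plan -> code -> review loop for chart generation.
# The model calls are stubbed; in practice they would hit whatever
# foundation-model APIs the platform uses, after PII has been stripped.

def call_text_model(prompt: str) -> str:
    # Stub standing in for a text-model API call.
    return "// generated D3.js code would go here"

def call_vision_model(prompt: str, chart_png: bytes) -> str:
    # Stub standing in for a visual-language-model (VLM) API call.
    return "OK"

def render_chart(d3_code: str) -> bytes:
    # Stub standing in for rendering the D3 chart to an image for review.
    return b""

def generate_chart(records: list[dict], goal: str, max_rounds: int = 3) -> str:
    # 1. Planner agent: decide chart type, axes, and encodings from the goal.
    plan = call_text_model(f"Plan a chart for: {goal}. Fields: {list(records[0])}")

    # 2. Coding agent: write D3.js code implementing the plan.
    d3_code = call_text_model(f"Write D3.js code for this plan:\n{plan}")

    # 3. Reviewer agent (VLM): inspect the rendered chart and request precise,
    #    targeted fixes instead of regenerating everything from scratch.
    for _ in range(max_rounds):
        critique = call_vision_model("List specific visual defects, or reply OK.",
                                     render_chart(d3_code))
        if critique.strip() == "OK":
            break
        d3_code = call_text_model(
            f"Apply only these fixes, change nothing else:\n{critique}\n\nCode:\n{d3_code}")
    return d3_code

# Usage with toy, non-sensitive records:
print(generate_chart([{"person": "A. Lee", "role": "CFO", "start": "2017", "end": "2025"}],
                     goal="show which executives held which roles from 2017 to 2025"))
```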

Marlene Gebauer (24:48)
Yeah.

I think you teed me up nicely for the next question. You know, your platform claims to visualize contradictions instantly. That’s very bold, but we’re going to hear about it. Technically, how do you distinguish between a subtle nuance in testimony and a hard lie? To me, that just seems like a massive challenge.

Jiyun (25:10)
Yes, that’s a really good question. I have to think about this for a little bit. So just to clarify, it’s not instant; it’s maybe three or four minutes. But that’s the startup language that I use, because we analyze 10,000 files every time.

Marlene Gebauer (25:24)
The fact that we couldn’t do it before at all, you know, I think this is pretty good.

Greg Lambert (25:26)
Yeah.

Jiyun (25:27)
It was gonna take like weeks before.

So it’s relatively instant. What we do is we don’t try to make the judgment. Our AI is not making a legal theory about the case. Our goal is to make the visuals, and to make every part of the visual traceable back to the specific evidence used to construct the chart, and then the lawyer viewing these charts can make the legal judgments based on that. So for example, if there’s a timestamp mismatch in your emails, or a location mismatch, or an inconsistent sequence of actions, that’s clearly a contradiction, and our AI can confidently tell you: hey, this person said this here, but we found this other evidence somewhere else, and that’s a contradiction. That’s the 100% case. But for the nuanced cases that you talked about, where it doesn’t break on the people involved, the timestamps, or the actions and when they happened, we’re not 100% confident when we’re judging that, so we give you a confidence level, and the lawyer has to make the decision based on what we give them. We’re not trying to have the AI decide that this person is lying or that person is telling the truth. We’re just saying: here are the clear contradictions; it doesn’t matter what they said, this is just clearly contradictory. And there are other ones that are gray areas, and we want the human to check those. So that’s our approach to this problem.
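A toy example of the “hard contradiction” check described here, limited to timestamp mismatches. The field names and one-hour tolerance are illustrative assumptions, not Givance’s pipeline; the key ideas are that deterministic mismatches get full confidence and every flag carries links back to its source documents, while anything softer is left for the lawyer.

```python
from datetime import datetime, timedelta

# Illustrative only: flag "hard" contradictions (timestamp mismatches between
# two accounts of the same event) and leave gray areas for human review.

def check_timestamps(claim_a: dict, claim_b: dict,
                     tolerance: timedelta = timedelta(hours=1)):
    """Each claim: {'event': str, 'time': datetime, 'source': str}."""
    if claim_a["event"] != claim_b["event"]:
        return None  # not the same event; nothing to compare
    gap = abs(claim_a["time"] - claim_b["time"])
    if gap > tolerance:
        # Hard contradiction: surface it with links back to both sources.
        return {
            "type": "timestamp_mismatch",
            "event": claim_a["event"],
            "gap_hours": gap.total_seconds() / 3600,
            "sources": [claim_a["source"], claim_b["source"]],
            "confidence": 1.0,  # deterministic check, so fully confident
        }
    return None

# Usage: compare a deposition statement against an email header (toy data).
flag = check_timestamps(
    {"event": "wire transfer approved", "time": datetime(2021, 3, 4, 9, 0),
     "source": "deposition_012.pdf"},
    {"event": "wire transfer approved", "time": datetime(2021, 3, 4, 17, 30),
     "source": "email_4481.eml"},
)
print(flag)
```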

Greg Lambert (26:55)
Thank you. Well, let’s bring it back to the human in the loop when it comes to legal tech adoption. You’ve written about what you call the innovation paradox, where startups might have great agility, but their workforce may lack the generative AI training to execute on those ideas.

For, say, the law firm innovation leaders listening to this podcast, we often feel a paradox as well. We look at buying tools, and then we have to train the associates or lawyers or whoever to prompt them effectively, to use them effectively.

What do you think the missing skill sets of lawyers are right now that they need in order to use a tool like Givance? Are we talking about just data literacy? Are we talking about prompts? Or what is it that we’re lacking?

Jiyun (28:15)
Yeah, that makes sense. So I can speak for Givance, for example. I think the biggest hurdle to using Givance for a lot of our users is knowing what kinds of visualizations are possible, because they’re just used to clustering and simple bar charts, line charts, pie charts. They’re like, are there any other charts besides these? Then we show them Gantt charts and they’re like, okay, I see. We show them Sankey charts and they’re like, okay, I see.

When I was exploring the D3 library that we just talked about, there are thousands of visualizations that are possible. I’m of course not saying law firms should use all of them, but they should definitely explore at least 10 or 20 of them, because it’s so easy to see, for example, during the discovery phase, when a lawyer submits all their discovery materials and the other party submits all theirs, which ones overlap, which ones only we submitted, and which ones only they submitted. Stuff like this is very easily visible using interactive Venn diagrams, and it’s not just Venn diagrams; there are other visuals that can delineate these things easily. So I think the biggest hurdle is knowing what kind of data can be visualized into what kind of charts.

We’re thinking about a few different solutions to this. Maybe we give them some training, give them some guidelines. Or we train another AI model that reads their intent and then gives suggestions as to how they can make these visuals.

Greg Lambert (29:47)
As long as you…

Marlene Gebauer (29:52)
Yeah.

Greg Lambert (29:55)
Are you finding that the AI models understand the data and can pick the 10 best examples, or are they just picking 10 at random?

Marlene Gebauer (30:07)
Like, what’s the best visual for what you’re trying to accomplish, you know?

Jiyun (30:11)
Yeah, so if the goal is clear, then the AI model is relatively good at suggesting what the visual should be. For example, if you say, I just want to see which executives were at this company from 2017 to 2025, then our AI model will definitely know: hey, I suggest a Gantt chart, and we can compare everyone side by side. But if you just give us 10,000 files and say, hey, what are the top 10 visualizations we could do, our AI is like, we don’t know, what’s your goal? So if your goal and your objective are clear, then our AI can definitely help you identify which visuals are best for you.
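A small sketch of that goal-to-chart-type suggestion step, with a stubbed model call. The chart list, prompt wording, and fallback are assumptions for illustration; the guard clause mirrors the point that without a stated goal there is no good suggestion.

```python
# Illustrative only: a goal-to-chart-type suggestion step.
CHART_TYPES = ["gantt", "sankey", "venn", "timeline", "bar", "line"]

def call_text_model(prompt: str) -> str:
    # Stub standing in for a foundation-model API call.
    return "gantt"

def suggest_chart(goal: str, fields: list[str]) -> str:
    if not goal.strip():
        # No stated objective means no good suggestion.
        raise ValueError("State a goal first, e.g. 'show who held which role from 2017 to 2025'.")
    prompt = (f"Goal: {goal}\nAvailable fields: {', '.join(fields)}\n"
              f"Pick the single best chart type from {CHART_TYPES}; answer with one word.")
    choice = call_text_model(prompt).strip().lower()
    return choice if choice in CHART_TYPES else "timeline"  # safe fallback

print(suggest_chart("show which executives were at the company from 2017 to 2025",
                    ["person", "role", "start_date", "end_date"]))
```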

Marlene Gebauer (30:55)
That’s good. You recently raised a $600,000 pre-seed, just about a month ago, right? Yeah. So congratulations. But, you know, there are some players in this space that are raising, like, hundred-million-dollar rounds. And so does this… what? Yeah, that too. Does…

Jiyun (31:03)
Yes, that’s right.

Thanks

Greg Lambert (31:15)
$900 million rounds.

Jiyun (31:16)
Yeah

Marlene Gebauer (31:21)
You know, but does operating at this scale give you more freedom to prioritize, say, accuracy over hypergrowth, or does it make competing with these massive sales engines more challenging?

Jiyun (31:38)
Yeah, I see. So this is a question I get asked in pretty much every vertical, because in any vertical there are massive companies raising massive amounts of money, or incumbents already operating in that space. The way I see it, legal tech is kind of special from my perspective, because legal tech for startups didn’t really exist until two or three years ago. A lot of law firms were not open to trying new AI tools. But these early players we mentioned, the ones who raised millions of dollars, were the ones who went in and expanded the market for us. So I see it as, I know the term “gold rush” has a negative connotation, but there’s a gold rush into legal tech from startup founders, and these companies are, from my perspective, paving the way for us. They’re spending all this marketing money and getting the law firms to open up to AI tools. And I hear that they’re signing one-to-two-year contracts, and all these law firms are also new to adopting AI tools, so they’ll try it. I would say in the end, as in any other vertical, penetration is really good, but the winner all comes down to technical accuracy, the quality of the product in the end.

Marlene Gebauer (32:58)
Does it execute, you know?

Jiyun (32:59)
Yes.

And so I’m not too concerned. I actually like the fact that they’re spending most of their money on marketing, because they’re basically making all these law firms more open to adopting and trying new AI tools. And the natural progression is, okay, I hear about these two tools, what are some other options? If you’re not completely happy, you might be shopping for other tools, and that opens up a window for other firms to jump in. I’ll also mention, as I did earlier, that I do feel like the first legal AI tools raising rounds of hundreds of millions are the generation-one tools. They pave the way for the second-generation tools, the multimodal tools, to jump in and fill the gaps, because I do think text-to-text doesn’t fill all the gaps that exist in law firms. So I don’t really see them as competition. And even if I did classify them that way,

Jiyun (33:54)
I like seeing competition, because it just means the market is growing. And if you look at the multiples at which they’re raising, I don’t want to mention specific names, but they’re raising at 80 times their revenue, 40 times their revenue, which is a crazy multiple if you think about it compared to other verticals. So that basically tells me…

Greg Lambert (34:09)
Yeah.

Marlene Gebauer (34:11)
You’re not the only one that’s saying that.

Greg Lambert (34:14)
I’m sure it will all work out in the end.

Marlene Gebauer (34:16)
Yeah.

Jiyun (34:17)
Yeah, so, you know, if that’s the case, then it also tells me that the market is going to grow, and I love how these firms are spending all their money doing the work for us so we can focus on the tech.

Greg Lambert (34:31)
Yeah,

sounds like a good business model. So before we get to the crystal ball question, and I’m really excited to hear your answer to that one because I think you’ve got some good insights, one of the things that we’ve been asking our guests is: what resources do you use to keep up with changes in the market, changes to what you’re doing? Are there any go-to

Jiyun (34:34)
Hahaha

Marlene Gebauer (34:34)
Mm-hmm.

Greg Lambert (34:55)
sources that you have?

Jiyun (34:57)
Mmm.

I see. So most of my knowledge about AI, where it’s headed, and the latest technology actually comes from my own network. My co-founder went through YC, so we have the YC network that we can tap into. I’ve been part of many accelerators, and there are many discussions there: I’ve tried this, I’ve tried that. I was also part of the research world; I was a researcher at a lab at Duke, so I also follow their work and the research papers that get published.

At the same time, we have top engineers from xAI, OpenAI, Meta AI, and Google who have invested in us as angel investors, so we have communication lines with them. We can just ask them: hey, what are you working on, what are you going to release? Of course they can’t tell me everything, but they can give me some pointers. The reason I do this is that when I look at the publicly available information, most of it is a little outdated. If I just follow that, I’ll be four or five months behind what people are actually working on at these top AI labs. My co-founder is really well connected; he was in big tech for nine years, and the people he used to manage are now working at Google DeepMind, xAI, OpenAI, Anthropic, and they’re interested in joining us. He has a clear communication line with these people. That’s how we get most of our information regarding AI.

Greg Lambert (36:27)
It’s still about personal relationships, isn’t it?

Jiyun (36:29)
Yeah, absolutely.

Marlene Gebauer (36:32)
Yeah,

Jiyun (36:33)
And I do have a sanity check that I always run, because I’m like, am I just being delusional? Am I falling behind? But we also recently got an acquisition offer from Meta AI. So we know, okay, Meta AI is a really big firm, we have a clear communication line between us and them, and they think we know what we’re talking about. So I guess that kind of validation also helps.

Marlene Gebauer (36:57)
Exactly.

So, this is the crystal ball question, and we ask everybody this. When you look at the crystal ball for 2026 and beyond, do you see the search bar disappearing from e-discovery? Will the future interface be more of a conversation with a visual agent, or will we always need that search box?

Jiyun (37:22)
I see.

Greg Lambert (37:23)
Or answer it however you want.

Marlene Gebauer (37:25)
You can just say that’s a stupid question and let me answer something else, or you can answer this one.

Jiyun (37:29)
Yeah, the only hesitation I have about this question is the word “beyond,” because for me, “beyond” means, well, I think AI will change our direction for the next 10,000 or 20,000 years. Of course you’re not referring to that. But in the short term, I don’t think the search bar is going to disappear. The search bar is there because people use it, and people associate control with it: when they see a search bar, they feel they have control over what they want to search, and they’re used to it. So even if something better comes along, most people are not early adopters. Even if something better is available, you’re just used to working with your old workflow. Unless all those people retire and are no longer in the industry, which is maybe in 20 years, I wouldn’t say the search bar is going to disappear. But maybe a few generations down, I do think the search bar will disappear.

Marlene Gebauer (38:32)
I mean, we’re still writers, so that’s sort of what I’m thinking. You still have that box where you can type something in, you know, in a written format.

Greg Lambert (38:34)
Yeah.

Yeah,

Jiyun (38:41)
Yes.

Greg Lambert (38:42)
Forgive my, I don’t know if it’s going to be a pun or just a bad use of language here. But do you mind… I know you’re big on visualization. What’s a really basic example of that that you think is going to be different from today to, say, a year or two from now?

Jiyun (39:04)
Sorry, couldn’t really hear your question.

Marlene Gebauer (39:07)
He was asking, you know, is there sort of a simple example of a visualization that you think is going to change things or impact things? Did I get that right, Greg? Okay.

Jiyun (39:10)
Mm-hmm.

Greg Lambert (39:19)
Yeah, yeah, thanks.

Jiyun (39:21)
So in, okay, with respect to this question, right?

Marlene Gebauer (39:25)
Mm-hmm.

Yes.

Jiyun (39:27)
I mean, a lot of other verticals are experimenting with visual agents that you can talk to. Some people say they like it, but I personally don’t really like it, because listening, for me, is much slower than actually reading. I read a lot faster than I listen, unless I can speed everything up. Also, I don’t really want to be talking to a virtual character when I’m searching for stuff. Who are you? You know, I don’t want a gatekeeper. Yes.

Marlene Gebauer (39:58)
It’s a little weird.

Greg Lambert (40:00)
I guess let me rephrase it a little bit then. I’m talking much more about outputs. A lot of the information we deal with is text-based, and I know that there’s a big push on that. So what do you think are going to be some of the simple examples that we’ll see from AI tools, outputs with visuals, a year from now that we’re not seeing now?

Jiyun (40:07)
Okay.

Yes. So, maybe a year from now, for the visuals that consumers or everyday people will see, I think it’s really going to start from very specific use cases within a specific vertical. The reason I say this is that there are a lot of AI tools that produce presentations for you, really quickly, and that’s having a massive impact on a lot of consumers who are used to producing presentations.

For D3, it’s something that we’re starting with, but it’s not something the everyday person is going to use. When’s the last time you produced a D3 diagram? So it’s mostly businesses. If you’re in legal tech, I would say you’ll start to see a lot more impressive, interactive visualizations. I would say Gantt charts will become pretty common among law firms. I think Sankey charts are really useful for divorces, when people want to divide up their assets: instead of reading a two-page report, you’ll see a Sankey chart of this asset goes to that person, this asset goes to that person, and you can drag and drop different things, and that’s how you would divide up the assets. So I see specific use cases, but for this to catch on and actually be mainstream, I would say it will take about five years or so, because I can see this being applied to education as well, and to business development as well. So a year from now, I don’t think it’s going to be enough for a normal person outside of legal tech to notice.

Greg Lambert (41:56)
Okay.

All right, well, Jiyun Hyo from Givance, it’s been a pleasure. We love talking visuals, so thanks for coming on The Geek in Review.

Jiyun (42:16)
Yes.

Yes, thank you. Yes.

Marlene Gebauer (42:24)
Yeah, thank you.

And thanks to all of you, our listeners, for taking the time to listen to The Geek in Review podcast. If you enjoyed the show, please share it with a colleague. We’d love to hear from you on LinkedIn and TikTok.

Greg Lambert (42:37)
And for listeners who want to learn more about Givance, where’s the best place for them to go?

Jiyun (42:43)
So you can just contact me on LinkedIn. I’m pretty active on LinkedIn. I can also give you my email and my website as well.

Greg Lambert (42:50)
We’ll put all that in the show notes.

Jiyun (42:51)
Yes,

thank you.

Marlene Gebauer (42:53)
And as always, the music you hear is from Jerry David DeCicca. Thank you very much, Jerry.