Isha Marathe, a tech reporter for American Lawyer Media, joined the podcast to discuss her recent article on how deep fake technology is coming to litigation and whether the legal system is prepared. Deep fakes are hyper-realistic images, videos or audio created using artificial intelligence to manipulate or generate fake content. They are easy and inexpensive to create but difficult to detect. Marathe believes deep fakes have the potential to severely impact the integrity of evidence and the trial process if the legal system is unprepared.

E-discovery professionals are on the front lines of detecting deep fakes used as evidence, according to Marathe. However, they currently have only limited tools and methods to authenticate digital evidence and determine whether it is real or AI-generated. Marathe argues that judges and lawyers also need extensive education on the latest developments in deep fake technology in order to counter their use in court. Regulations, laws and advanced detection technology are still lacking but urgently needed.

Marathe predicts that in the next two to five years, deep fakes will start to significantly affect litigation and will pose risks to the judicial process if key players are unprepared. States will likely pass a patchwork of laws to regulate AI-generated images. Sophisticated detection software will emerge but will not be equally available in all courts, raising issues of equity and access to justice.

Two recent cases in which parties claimed evidence was a deep fake highlight the issues at stake, though neither claim dramatically altered the trial outcome. However, as deep fake technology continues to advance rapidly, it may soon be weaponized to generate highly compelling and persuasive fake evidence that could dupe both legal professionals and jurors. Once seen, such imagery can be hard to ignore, even if proven to be false or AI-generated.

Marathe argues that addressing and adapting to the rise of deep fakes will require a multi-pronged solution: education, technology tools, regulations and policy changes. But progress on all fronts is slow while the threats escalate quickly. Deep fakes alarm legal professionals and the public alike, threatening to drag the legal system as a whole into an era of “post-truth.” Trust in the integrity of evidence and trial outcomes could be at stake. Overall, it was an informative if sobering discussion on the state of the legal system’s preparedness for inevitable collisions with deep fake technology.

Links

Listen on mobile platforms: Apple Podcasts | Spotify

Contact Us:

Twitter: @gebauerm, or @glambert
Voicemail: 713-487-7821
Email: geekinreviewpodcast@gmail.com
Music: Jerry David DeCicca

Transcript

Marlene Gebauer 0:07
Welcome to The Geek in Review, the podcast focused on innovative and creative ideas in the legal profession. I’m Marlene Gebauer.

Greg Lambert 0:14
And I’m Greg Lambert. So Marlene, this is our five-year podcast anniversary. Yay. I just didn’t want that to go past us without saying something, because I think it’s been, you know, just a fantastic five years. And, you know, it really feels like we’ve almost started over this year, with all the generative AI and advancements in technology that have been going on.

Marlene Gebauer 0:42
It’s true. I mean, you know, we’ve always had great interviews, but this is sort of a whole new area that is new to everybody. And I think that makes it even more exciting.

Greg Lambert 0:52
It’s been fun because at work, I sound like the expert.

Marlene Gebauer 0:57
It’s like everybody’s an expert now. Right. You know, and as you noted, Greg, we have had a lot of episodes devoted to generative AI tools, you know, what they can do, how to address risk in using them. Today, though, we’re delving into a darker area, one where AI is used to purposely fool the viewer and impact decision making in court. So welcome to the world of deep fakes in court. Are we ready? Say yes.

Greg Lambert 1:28
Yes, yes.

Marlene Gebauer 1:30
Our guest today has been speculating on that question. Isha Marathe, a legal tech reporter for American Lawyer Media, recently wrote the article “Deep Fakes Are Coming to Court: Are Judges and Lawyers Ready?”, where she delves into the implications for litigation. Isha, welcome to The Geek in Review.

Isha Marathe 1:49
Thank you so much for having me. I’m excited to talk about this. Yeah, we are too.

Greg Lambert 1:57
This is our ghost story episode. Well, Isha, before we dive into the article, let’s back up a little bit and just talk about, you know, what exactly a deep fake is. I think we’ve seen a lot of these on social media, and we’ve probably heard they’ve shown up in some of the darker recesses of the web, but can you give us an overview?

Isha Marathe 2:31
Yep. So, a deep fake, I mean, it’s right there in the word: it’s just a blend of “deep” and “fake.” The “deep” comes from deep learning, which is a form of machine learning, so that’s the AI bit of it. And then the “fake” is more obvious; it comes from the fact that this technology is used to create an image, which usually incorporates the likeness of another person, and it does so in a really convincing, really good way. So I really see a deep fake as, like, a really, really good, super-realistic Photoshop situation.

Marlene Gebauer 3:12
How are they created? I’m very curious to know, because you mentioned this is a really, really good Photoshop, and, you know, I’ll show my kids pictures, and they can tell me right away, like, oh, that’s been shopped. And I can’t tell, I can’t tell, but they can. But apparently these are so good that maybe even they can’t tell. So, like, how are they created?

Isha Marathe 3:33
They are, they’re really good. And they’re getting better, I almost want to say, like, every two weeks. Oh my god, like, only two months ago, they couldn’t really create hands too well, or there were, like, too many teeth in people’s mouths. And now the hands look great and the teeth look great. So they’re really good Photoshops, and I shouldn’t even say Photoshop; I just say that because that’s the only kind of barometer to equate it with. But they’re made very differently. It’s not a cut-and-paste situation. There are many ways of actually making deep fakes. And back in the day, and by that I mean before the democratization of generative AI, which has happened over the past few months, there was this kind of complicated and expensive and lengthy process where you needed high-end computers. You had to use, like, an encoder-decoder algorithm, where two people’s faces were run through two encoder-decoder pairs. The encoders kind of learned the latent facial features of these people during the training, and then at the end of the process, the decoders were swapped, so that the image was decoded to produce the facial features of the other person. And this sounds complicated. It was complicated, and it took so long because you had to repeat this process over and over, like, hundreds of times for each frame of a video, to train the algorithm sufficiently, you know, to create a believable image, or deep fake. And that’s just one example; there were tons of ways to make these kinds of time-consuming, complex deep fakes. But that, of course, changed after OpenAI came on the scene. You know, the first tool was ChatGPT, which doesn’t create deep fakes, but it kind of democratized generative AI, as I said. So now we have Stability AI and Midjourney, and Midjourney is famous slash infamous for the Pope in the puffer jacket deep fakes,
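
[Editor’s note: for readers who want to see the shape of the older encoder-decoder approach Isha describes, here is a minimal PyTorch sketch. The network sizes, the 64×64 input, and the training loop are illustrative assumptions, not a reconstruction of any actual deepfake tool: one shared encoder learns latent facial features for both people, each person gets their own decoder, and the “swap” comes from decoding one person’s latent with the other person’s decoder.]

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),  # 256-dim latent "facial features"
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )
    def forward(self, z):
        x = self.fc(z).view(-1, 64, 16, 16)
        return self.net(x)

encoder = Encoder()                          # ONE encoder shared by both people
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per person
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder_a.parameters())
    + list(decoder_b.parameters()), lr=1e-4)
loss_fn = nn.L1Loss()

def train_step(faces_a, faces_b):
    # Each decoder learns to reconstruct its own person from the shared
    # latent space -- repeated over many batches / video frames.
    loss = (loss_fn(decoder_a(encoder(faces_a)), faces_a)
            + loss_fn(decoder_b(encoder(faces_b)), faces_b))
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# The "swap": encode person A's face, decode with person B's decoder,
# producing B's likeness with A's pose and expression.
with torch.no_grad():
    frame_a = torch.rand(1, 3, 64, 64)  # stand-in for a real video frame
    fake = decoder_b(encoder(frame_a))
```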

Marlene Gebauer 5:48
Which we all loved, you know. It’s like, that’s the thing. Yeah.

Isha Marathe 5:52
Though, it’s so great. And, you know, there’s Lensa, which we saw, I think, on everyone’s Instagram story at one point, and it kind of looked cool, and, you know, looked just like people’s faces, with all these cool backgrounds that looked realistic. And anyway, this thing that’s kind of been meme-ified and made fun and viral has changed this lengthy, expensive process that needed a lot of skill and good equipment into something that a beginner can now do, just by clicking a button, and create these super-realistic images. You can feed in a text prompt, which is what I think the Pope in the puffer jacket situation was, or you can feed images into some of these tools. And the software can be found super easily. It’s on GitHub, which is an open-source development community. So it’s really approaching click-of-a-button technology.

Greg Lambert 6:49
So, you know, so much easier than it used to be.

Marlene Gebauer 6:53
It really is. So, you know, we were talking about the Pope and the puffer, and, you know, that was fun and amusing; everyone had a laugh. But the Pope in a puffer aside, say that quickly, you reported that deep fakes are sort of creeping into litigation. Why are deep fakes so dangerous in that context?

Isha Marathe 7:18
So there are really two ways for deep fakes to come into litigation as of right now. One of them is a deep fake at the center of a case, say it’s a defamation case, say the Pope was like, I didn’t like that, you know, or whatever. But the other, which right now is scarier to me, is a deep fake coming in as evidence in a case that has nothing to do with deep fakes. And that second possibility is the one that we talked a lot about with really great sources for the piece you’re referring to, and I think it poses the most challenges right now. Because without the education, without the technology, and without the regulations in place to tackle a technology that is evolving at breakneck speed and is suddenly, like, in the hands of everyone, the legal system doesn’t have too many ways to stop it from entering trials and courtrooms, or even coming in front of a jury, which is the most scary part, and I’m sure, you know, we’ll get into that more later. But it’s really a challenge to deal with evidence that just is not what it looks like, or sounds like, at first instance. And I think this technology has really dragged the legal industry into a post-truth era similar to the one we found politics in around 2015, 2016. That’s scary to me.

Marlene Gebauer 8:52
I mean, I’m kind of wondering, you know, about the people who deal with evidence: are they ready for this? Are they aware of this? And how are they going to deal with it?

Isha Marathe 9:02
Um, okay, so there are many people who could technically deal with it. There are, like, different...

Marlene Gebauer 9:11
And maybe we should start with, like, who deals with it, maybe just sort of answer that first, right? And we can go into sort of what to do later.

Isha Marathe 9:19
The very first line of defense, according to our reporting so far, has been e-discovery professionals, because not only are they likely to be the first ones to catch something amiss, but they also probably have the most tech education. My colleague Cassandre Coyer, who is also on the tech desk, actually wrote a really great piece about the three lines of defense that e-discovery professionals have right now when it comes to tackling deep fakes. The first line is the metadata, which may give e-discovery folks enough information, even before review, just by trying to track the lifecycle of how the data connected to an image, which may or may not be a deep fake, has been flowing. The second line of defense is forensics, which is actually analyzing the image, running it through software, or doing a semantics analysis. And then the third line of defense is the most forward-looking one, and I think it extends beyond legal: waiting for good tools to come to market that are going to detect AI-generated output. So yeah, that’s the discovery portion of it, I think.
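
[Editor’s note: as a rough illustration of that first, metadata line of defense, here is a small Python sketch that uses Pillow to pull EXIF tags from an image file. The specific red flags and the exhibit file name are illustrative assumptions; real e-discovery review tracks a file’s full lifecycle, not just its EXIF tags.]

```python
from PIL import Image, ExifTags

def summarize_metadata(path: str) -> dict:
    img = Image.open(path)
    exif = img.getexif()
    # Map numeric EXIF tag ids to human-readable names
    readable = {ExifTags.TAGS.get(tag_id, tag_id): value
                for tag_id, value in exif.items()}
    # Red flags: a missing camera make/model or capture timestamp can mean
    # the image was generated or scrubbed (or just innocently re-saved --
    # absent metadata is a lead to investigate, not proof of fakery).
    flags = []
    if "Make" not in readable and "Model" not in readable:
        flags.append("no camera make/model recorded")
    if "DateTime" not in readable:
        flags.append("no capture timestamp")
    if readable.get("Software"):
        flags.append(f"edited with: {readable['Software']}")
    return {"metadata": readable, "flags": flags}

print(summarize_metadata("exhibit_042.jpg"))  # hypothetical exhibit file
```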

Greg Lambert 10:34
Are you seeing any examples already entering the courts? And are there particular types of cases that are more, I guess, vulnerable to this?

Isha Marathe 10:45
So, um, I think that, as of late May, we’ve seen two, in terms of cases where a party claimed that the evidence presented was a deep fake. One was in a Tesla lawsuit involving an image of Elon Musk, and another was related to the January 6 riots, involving former President Donald Trump. And both judges determined that the evidence was not, in fact, a deep fake. So it came up and it kind of went; it didn’t cause too much of an effect on the case itself. So we haven’t seen anything so far that has, like, altered a case dramatically. But for the people we talked to, even these two cases are enough to signal something in the pipeline.

Marlene Gebauer 11:47
So, you know, you quite rightly mentioned that e-discovery folks are sort of the first line of defense in identifying deep fakes. Are there any others, you know, before things get to court? And then once something gets to court, how do they make this determination? And how successful are these methods at shining a light on deep fakes?

Isha Marathe 12:16
So a lot of the questions around how successful

Marlene Gebauer 12:21
the methods are? Do we even know?

Isha Marathe 12:24
All right, yeah, it’s just early days for that, and I wish I knew. But, you know, to me, a lot of it falls on counsel, and on them being educated enough to inform the bench about the technology that’s coming in. And Lee Tedric, who I actually interviewed for the article, does workshops with judges, where she has, I believe, monthly sessions to just educate them about the basics of this technology. So the education part kind of falls on everyone. But right now, if we’re talking about before things even come into court, I think e-discovery experts would be, like, the key player here.

Greg Lambert 13:14
Yeah. Well, I mean, they’ve been the key for, you know, fake documents, or documents that have been altered. So it would make sense. But I know Professor Tedric had talked about the amount of education that the judiciary needs for this. But, you know, I’m hoping that it doesn’t cost any money, because I don’t have any money. So, I mean, is education going to be enough, I guess?

Marlene Gebauer 13:46
No, it’s not going to be free.

Isha Marathe 13:50
And, you know, it’s not going to be consistent and equitable. I remember when I started reporting on legal tech, a source said to me, if you know one court, you know one court, and that’s just, like, super true. So I think that’s going to be very true here. You know, we saw over the pandemic that there were some courts that were super prepared; you know, they already had cloud integrations and video conferencing technology, and their judges knew how to use Zoom. And then there were other folks who were taping cameras to the ceiling, trying to rig up just some type of video conferencing situation. And I think that’s how this is going to be. You know, the tech that’s coming up to detect AI-generated images is super new. The education is super sparse right now, because, you know, who even knows what’s going on, including the people who are making the technology? So, yeah, I wish I had a better answer for that other than, no, it’s just going to be rough.

Marlene Gebauer 14:53
Yeah, I mean, because I know Professor Tedric was mentioning that, you know, judges should have trusted technical experts on hand who can authenticate. And as, you know, Greg very rightly pointed out, the courts don’t have the financial or administrative resources to hire these experts. And I just wonder how we’re going to deal with it. Well, yeah, I

Greg Lambert 15:19
think Isha already said: we’re not, we’re not.

Marlene Gebauer 15:24
Yeah, no, I’m just hoping there’s another answer. I don’t, I don’t know...

Isha Marathe 15:29
It’s not that all hope is lost; I don’t mean to say that. I just think that, in terms of the education, it’ll be slow. But there are tools in the toolkit currently, even for judges. And, you know, one of the tools that comes up often is the Federal Rules of Evidence 901 and 902 series, which gives the court a rubric for dealing with computer-generated evidence. So there are little pieces in there that judges can rely on. And some people we spoke to believe that that’s enough. They believe that “a widget is a widget,” to quote one of my sources, and it shouldn’t make a difference whether it’s a deep fake or a text message or a Slack message or an emoji; you know, at the end of the day, it’s all the same. There are people who believe that. Yeah. Yep. So there are these rules.

Marlene Gebauer 16:26
Going back to sort of how this is going to impact the courts: what about the existing backlog? Will having to deal with deep fakes further delay litigation? And how much more pressure will litigants face to settle or arbitrate or mediate?

Isha Marathe 16:42
I personally do not know about the effect on settlements, and I wonder, again, whether there have just not been enough cases to know yet. However, it is super likely that deep fakes will complicate litigation, just because of the situation where there’s a lack of blanket, comprehensive education around this technology, and also a lack of technical experts for judges. You know, what is going to happen is that both parties will just have their own experts, if they can afford them, and there are going to be expenses, which brings up a lot of access-to-justice issues. It’s going to be expensive, and it’s going to lead to what folks call a battle of the experts, where there’s just an image of Person A meeting Person B, and one party says the image is fake. And, you know, without universal watermarking standards, and without a court that’s equipped to handle the situation, each side is just going to bring its own expert, and they’re just going to duel it out in this confusing and expensive battle of experts.

Greg Lambert 17:55
Well, and obviously, you know, if someone purposefully submits fake evidence in a case, there would be repercussions for that, if caught. So it’s a very high-risk move to make. And I know Mr. Schwartz from New York just got fined $5,000. Have you seen any kind of actions or sanctions so far? Because the courts just came out with new rules saying you’ve got to tell us if you’re using generative AI. Has anyone jumped on this and required parties to verify that these are not deep fakes?

Isha Marathe 18:42
I do not know of courts or judges specifically who have said this yet, from the reporting that I’ve done through the end of May. But I believe there are regulations coming out of Texas, Washington, and California; the California law might actually have sunsetted earlier this year. But there are state laws coming up, kind of like data privacy regulations, which we often call a patchwork because they’re slightly conflicting. And, you know, in the absence of federal regulations, of course, that’s what we have. So I think that’s what we have right now when it comes to any authentication compliance for AI image generators. But nothing I know of for a court in particular.

Marlene Gebauer 19:33
So if a deep fake does get past safeguards, you know, if it gets in front of a jury, even if it’s discovered and stricken, what sort of impact do you think that has? Do you think jurors can just walk away from what they’ve already seen? I mean, images can be very compelling.

Isha Marathe 19:51
Yeah, there’s really kind of a debate here among our sources for the story about deep fakes in court. At least half of the folks we talked to say, well, a jury is almost sacrosanct, in a way where you can’t even question them; they’re just a cog in the machine, and we just have to trust what they’re doing, and that’s just how it’s always been, and you can’t really get into their psychology too much here around deep fakes. But then there are others, and this is the part that I’m more interested in, who say that a jury, at the end of the day, is just a set of people, and shaking off that first impression of an image or a video, just because a judge says, oh, disregard that, you know, it’s not really realistic. And taking that a step further, if either side is then dissatisfied with the jury’s ability to unwind its knee-jerk presumptions about a fake image or video, it could lead to calls for mistrials, and again, lengthy legal battles. So I do think, you know, the potential to influence a jury here is really alarming.

Greg Lambert 21:02
So, Isha, we’ve talked about the Pope and the puffer jacket, and really images that are being submitted as evidence. But there are other types of evidence that could also be manipulated, such as video recordings that are introduced, or even witness testimony or transcripts. So what steps do you think can be taken on these things as well?

Isha Marathe 21:32
I really think that starting education early, and I know that’s always an easy thing to say, because you can’t really touch it, it’s not so tangible, but I do think starting education early for judges, you know, having mandatory education courses or CLE credits around this, is the way to go. Not just for judges, but for counsel and for e-discovery professionals as well, you know, just flagging that, hey, this is coming. We might see things, we might hear things, that aren’t what they seem to be. Just keep that in mind. Aside from technology...

Greg Lambert 22:12
Yeah, those are things courts can incorporate. So what I was going to ask is whether the technology is keeping up. For example, we’ve talked about people using AI for classwork, you know, for homework, and I imagine it’s being used for trials as well. But there are systems out there where, supposedly, you can paste something in and they can tell you, you know, there’s probably an 80% chance this was written by AI. Is the technology to catch deep fakes at least keeping a close pace with the advancements in creating the deep fakes?

Isha Marathe 22:56
Yeah, that’s really a good question that I don’t know the answer to. I’ve heard both things. I’ve heard folks saying, no, there’s a lot of interest in creating this technology, and then I’ve heard folks saying that there’s really not enough money being put there.

Greg Lambert 23:15
That’s how people want to spend their money.

Isha Marathe 23:18
That’s right. That’s right. And I don’t know, honestly, which side is true. But I think there are solutions, certainly; there’s, like, Sensity, Maltego, ZeroFOX, among several solutions that claim to be able to identify deep fakes. But until there are enough of them, and until there are enough cases to do this analysis, I think it’s going to be hard to say. Though a lot of what I know is just experts looking at an image and then weighing in. Most recently there was that video that Florida Governor Ron DeSantis posted, or his campaign, I apologize, posted on Twitter; it was a mishmash of pictures of Donald Trump, and one of the pictures was Donald Trump hugging Dr. Fauci. And, you know, that one has since been debunked as a deep fake. Experts came out and said, oh, his collar doesn’t line up, and his buttons are a bit weird, and his neck doesn’t look real. But on first glance, it kind of looks real. Yeah. And that’s really the concern, right? By the point people see it, it’s already too late. That’s really the concern with that technology.
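
[Editor’s note: one long-standing image-forensics heuristic of the kind such experts use is error level analysis (ELA): recompress a JPEG once and see where the error concentrates, since spliced or regenerated regions often carry a different compression history. A minimal Python sketch follows; it is a screening aid, not a deepfake detector, and the file names are hypothetical.]

```python
import io
from PIL import Image, ImageChops, ImageEnhance

def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    original = Image.open(path).convert("RGB")
    buf = io.BytesIO()
    original.save(buf, "JPEG", quality=quality)  # recompress once
    recompressed = Image.open(buf)
    # Residual between the original and its recompression
    diff = ImageChops.difference(original, recompressed)
    # Amplify the residual so regions with an uneven compression
    # history stand out visually for a human reviewer
    extrema = diff.getextrema()
    max_diff = max(channel_max for _, channel_max in extrema) or 1
    return ImageEnhance.Brightness(diff).enhance(255.0 / max_diff)

error_level_analysis("disputed_photo.jpg").save("disputed_photo_ela.png")
```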

Marlene Gebauer 24:40
Yeah. And I keep going back to this idea of how it’s going to impact the credibility of evidence. You know, is this going to force a change in the rules of evidence? How does this proliferation of deep fakes impact the burden of proof for legal professionals and the standards of admissibility for evidence in court? I mean, do you have any thoughts on that?

Isha Marathe 25:08
You know, so far, what we’ve heard is that it’s super unlikely for the rules themselves to change. What is more likely, and here I’m looking at the FRE 901 and 902 series, which basically says that the evidence produced must be, quote, “sufficient to support a finding that the item is what the proponent claims it is,” end quote. You know, in the pre-deep-fake era, circumstantial evidence was often enough to determine the authenticity of something, because courts wanted to avoid lengthy trials that involved, you know, expensive forensic experts for every step and affidavits for every authentication. But I think now the impact might just be that attorneys have to rely on forensic experts more than they have. And it’s just going to be these, again, expensive practices that were shed by courts that are going to come back. And I think that’s just going to impact people who really can’t afford trials that go like that. And, you know, that’s who I really worry about the most.

Marlene Gebauer 26:24
So this has been just a fascinating and scary conversation. And, yeah, there is a great burden on legal professionals here to educate themselves. So how can they most effectively educate themselves and stay updated on the latest developments in deep fake technology, to, you know, better navigate its impact on litigation and evidence? No easy answer, I’m sure.

Isha Marathe 26:55
Yeah, there are many ways. From my perspective, I’m always like, you know, read the right publications, check out the scholarship that’s coming out. And there is some really interesting scholarship coming out about this; off the top of my head, Rebecca Delfino wrote a great paper about deep fakes in trials. And seek out workshops, seminars, talk

Marlene Gebauer 27:17
to your e-discovery people, you know,

Isha Marathe 27:19
Yeah, talk to your e-discovery people, talk to your IT department.

Greg Lambert 27:22
Read Isha’s articles.

Marlene Gebauer 27:24
Yeah, exactly. All of them, all the time.

Isha Marathe 27:29
And I would also say, recruit and hire the right people, you know, whether they’re tech experts in judges’ back pockets, or for counsel, or just in e-discovery departments. I wish there was one answer to that. I would love a single answer. But there isn’t, I don’t think.

Greg Lambert 27:46
Well, you know, we work in an industry that loves looking back at things, at what kind of precedent is out there. So I know deep fakes are kind of the new rage, but before that there was Photoshop, and before that there was, you know, photo manipulation and evidence manipulation. Do you know if there is any legal precedent, or any landmark cases out there, where this type of thing, either a deep fake now or what would have been essentially a deep fake years before, significantly changed the outcome of a case? And are there any lessons that this industry can look back on and apply to what we’re approaching?

Isha Marathe 28:36
I definitely don’t know of any cases like that involving deep fakes as the technology we’re talking about today. But when it comes to the past, the only thing that comes to mind, and I don’t even know if this directly connects, is this: we’ve talked a lot about deep fakes in elections and politics, but the place where litigation is most going to come up is defamation, in my opinion, and not just in the US but also internationally, which is what we’ve reported. And, sadly, the majority of deep fakes are seen in revenge porn and porn that uses, you know, the faces of celebrities in videos. And there was a case, I’m trying to look it up, that actually struck down a federal child pornography law, I believe, ruling that the First Amendment protects the graphic manipulation of images, even if it appears that minors are engaging in these activities, because it’s generated, you know, synthetically; it’s not real, and nobody was harmed in the making of it. And that’s the only thing that comes to mind. How it’s going to directly connect to deep fakes, I don’t know, except to say that there are really strong First Amendment protections, so while defamation litigation might go up, it’s really unlikely to succeed, you know, because it’s hard to prove, and there’s a lot of protection for creativity and parody and for synthetically produced materials, and political speech. That’s right. So I don’t know if that was a bit of a stretch, but that’s all that really came to mind.

Marlene Gebauer 30:48
So, you know, we’ve talked about having more education, and we’ve talked about different types of technology that can perhaps spot deep fakes. What other measures, if any, should be taken to adapt to deep fake technology that’s impacting e-discovery and the overall judicial process? You know, are there any things that need to change? I mean, apparently the rules don’t need to; the experts are saying the rules don’t need to change. But, you know, what does?

Isha Marathe 31:20
You know, there are two things. One is really keeping an eye on the AI detection technology that’s coming out and incorporating it as quickly as you can, and by “you,” I mean courts and e-discovery professionals. And the other thing, and this is the slower process, is if there were a federal statute that would make AI image generation companies watermark their output. That could be another thing that would be super helpful in this fight against deep fakes. But yeah, that’s what I think, other than education...
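
[Editor’s note: to make the watermarking idea concrete, here is a toy Python sketch of the embed-and-verify handshake such a statute would rely on: the generator stamps its output, and a court-side tool checks for the stamp. The least-significant-bit scheme shown is deliberately simple and easy to defeat; real proposals lean on signed provenance metadata or robust watermarks, so treat this only as an illustration of the workflow.]

```python
import numpy as np
from PIL import Image

MARK = np.frombuffer(b"AI-GENERATED", dtype=np.uint8)
MARK_BITS = np.unpackbits(MARK)  # 96 bits to hide in pixel LSBs

def embed_mark(img: Image.Image) -> Image.Image:
    # Overwrite the least significant bit of the first 96 channel values
    pixels = np.array(img.convert("RGB"))
    flat = pixels.reshape(-1)
    flat[:MARK_BITS.size] = (flat[:MARK_BITS.size] & 0xFE) | MARK_BITS
    return Image.fromarray(flat.reshape(pixels.shape))

def has_mark(img: Image.Image) -> bool:
    # Read back the LSBs and compare against the expected stamp
    flat = np.array(img.convert("RGB")).reshape(-1)
    bits = flat[:MARK_BITS.size] & 1
    return bytes(np.packbits(bits)) == MARK.tobytes()

stamped = embed_mark(Image.new("RGB", (64, 64), "gray"))
print(has_mark(stamped))  # True -- and False for any unstamped image
```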

Marlene Gebauer 32:03
and triple damages.

Greg Lambert 32:05
Yeah. Yeah, I was gonna say, come down hard on the first one that gets caught. So, Isha, we’re at the point now where we ask all of our guests our crystal ball question. We’re going to ask you to pull out your crystal ball and peer into the future for us. What do you see happening with deep fakes and their effects on the legal system, say, over the next two to five years?

Isha Marathe 32:30
Hmmm, let me think about this. So I really think that we haven’t seen deep fakes affect litigation to a significant extent yet, but the dam is going to break, I think. And, you know, if courts and counsel and juries and e-discovery professionals are unprepared, I think it poses a real risk to the judicial process. That’s one prediction. And at the same time, I think that, like data privacy statutes, we are going to see a patchwork of conflicting laws, but a patchwork nonetheless, that regulate AI-generated images and audio and video. And I also think we will have sophisticated detection software. I really do. But I think that the system is inequitable, and so some courts will have it and some courts won’t.

Marlene Gebauer 33:28
Well, we’ll leave it at that. Isha Marathe, tech reporter with ALM, thank you very much for taking the time to share your insights on the effects of deep fakes with us.

Isha Marathe 33:37
I had such a good time talking to you guys. Thank you so much for having me.

Greg Lambert 33:41
I’m still scared, though.

Isha Marathe 33:44
I know. Like, I always used to read these news articles and be like, oh, I hate the alarmists. But now, I fear, I sound like one of them.

Marlene Gebauer 33:55
It seems like every day there’s just another alarm going off. But thank you for sharing this with us and educating us in this area. And of course, thanks to you, our audience, for taking the time to listen to The Geek in Review podcast. If you enjoy the show, share it with a colleague. We’d love to hear from you, so reach out to us on social media. I can be found at @gebauerm on Twitter,

Greg Lambert 34:17
And I can be reached @glambert on Twitter. Isha, if someone wants to reach out to you, where’s the best place?

Isha Marathe 34:24
It’s also on Twitter at @IMarathe

Marlene Gebauer 34:29
And listeners, you can also leave us a voicemail on The Geek in Review hotline at 713-487-7821. And as always, the music you hear is from Jerry David DeCicca. Thank you, Jerry.

Greg Lambert 34:42
Thanks, Jerry. All right, Marlene, and happy anniversary.

Marlene Gebauer 34:44
Thank you. You too.