This week, we sit down with two guests from Anthropic, Matt Samuels, Senior Product Counsel, and Den Delimarsky, a core maintainer of the Model Context Protocol, or MCP. Together, they unpack why MCP is drawing so much attention across the legal industry and why some are calling it the USB-C for AI. For law firms long burdened by disconnected systems, scattered data, and the infamous integration tax, MCP offers a shared framework for connecting models to the places where real work and real knowledge live, from iManage and Slack to email, data lakes, and internal tools.

Den explains that the promise of MCP is not tied to one model or one vendor. Instead, it creates a standardized way for AI tools to securely interact with many different systems without forcing organizations to build one-off integrations every time they want to connect a new source. The conversation gets especially relevant for legal listeners when Greg and Marlene press on issues like permissions, ethical walls, least-privilege access, and auditability. The answer from Anthropic is reassuring. MCP is built to work with familiar enterprise security concepts such as OAuth and role-based access, meaning firms do not have to throw out their security model in order to explore new AI workflows.

Matt brings the legal and operational lens, translating MCP into practical use cases for lawyers, legal ops teams, and security leaders. He describes how AI becomes far more useful once it has access to the systems lawyers already rely on every day, while still operating within carefully defined administrative controls. The discussion highlights a key shift in how firms should think about AI. This is no longer about asking a chatbot a clever question and getting a polished paragraph back. With MCP, firms are moving toward systems where AI can retrieve, correlate, summarize, draft, and support actions across multiple platforms, all while staying inside the guardrails set by the organization.

The episode also explores how MCP fits into the rise of agentic workflows, apps, plugins, and skills. Rather than treating AI as a static assistant, Anthropic describes a future where these tools become active participants in legal work, pulling together information from multiple sources, helping assemble case timelines, drafting notes into a shared document, and supporting lawyers in a far more integrated workspace. The conversation around skills is especially useful for firms thinking about standard operating procedures, preferred drafting styles, escalation rules, and repeatable work product. Skills and MCP do different jobs, but together they start to look like the operating system for structured legal workflows.

By the end of the conversation, one message comes through clearly. The legal profession is still early in this shift, but the pace is picking up fast. Both Matt and Den encourage listeners to stop treating these tools like abstract future concepts and start experimenting with them now. At the same time, they offer an important note of caution. As much as these systems promise speed and efficiency, lawyers still need to protect the craft of lawyering, their judgment, and the human choices that matter most. For firms trying to make sense of where AI is headed, this episode offers a grounded and practical look at the infrastructure layer that could shape what comes next.

Listen on mobile platforms: Apple Podcasts | Spotify | YouTube | Substack

[Special thanks to Legal Technology Hub for sponsoring this episode.]

Email: geekinreviewpodcast@gmail.com
Music: Jerry David DeCicca
Transcript

Greg Lambert (00:00)
Hi, everybody. Greg Lambert from The Geek in Review, and I am here with our friend Nikki Shaver from Legal Technology Hub. Nikki, you have a new event series coming out called the Demo Dozen. Tell us more about that.

Nikki (00:15)
That’s right. Hi, Greg. Hi, everyone. Actually, we held our first one of these on February 19, hosted by Stephanie Wilkins. You’re right, it’s called our Demo Dozen event. At these new LTH events, which are free to attend, by the way, you can log in virtually to see 12 curated providers offer demos of their products.

It’s an easy way to get up to date on what’s new in the market, whether that is brand-new products or existing ones with new features. We’ll be offering these throughout the year, and our next one is coming up on May 19. So if you’re a vendor listening and would like to participate in upcoming Demo Dozen events, let us know at LegalTechnologyHub.com, or reach out to me or Chris Ford on LinkedIn. If you’re a buyer or other participant in the market and you’d like to learn more or attend one of these events, make sure you follow our page on LinkedIn. It’s at LegalTechHub. Or if you follow along at LegalTechnologyHub.com, we have an Events dropdown on our page, and if you follow that, you’ll see registration links for all of our upcoming events, including Demo Dozen. So keep an eye out for that. It’s a great way to know what’s what in the market.

Greg Lambert (01:30)
And having these compact demonstrations really helps people like me evaluate things quickly. So thanks for doing this.

Nikki (01:40)
Thanks, Greg.

Marlene Gebauer (01:49)
Welcome to The Geek in Review, the podcast focused on innovative and creative ideas in the legal industry. I’m Marlene Gebauer.

Greg Lambert (01:56)
And I’m Greg Lambert. Marlene, for years we’ve talked about the integration tax, which is that massive headache that law firms face trying to get all of our different vertical software systems to actually talk to each other. It’s been a pain for years for us. We’ve all been there.

Marlene Gebauer (02:14)
Yeah, I’ve been the victim of that tax.

Greg Lambert (02:17)
Yes. And with the explosion of AI, that problem got worse. But maybe now we have finally reached that USB-C moment for the legal tech stack. We’re going to talk about it today. So we’re joined by two leaders from Anthropic who are at the center of this new open standard called Model Context Protocol, or MCP.

Greg Lambert (02:46)
And our good friend and multiple-time podcast guest Pablo Arredondo, who recently stepped away from his role at Thomson Reuters, connected us with a couple of folks from Anthropic to discuss this topic. We’re really excited. We have with us Matt Samuels, who’s a Senior Product Counsel at Anthropic. Matt focuses on the intersection of agentic innovation and enterprise legal needs, helping bridge the gap…

Marlene Gebauer (03:03)
Thank you, Pablo.

Greg Lambert (03:16)
…between complex technical infrastructure and the high security, safety, and confidentiality requirements of the legal profession. Sounds like my job.

Matt Samuels (03:25)
We try, we try.

Marlene Gebauer (03:27)
And we’re also joined by Den Delimarsky, a member of the technical staff at Anthropic and one of the core maintainers of the MCP project, so he knows all the secret sauce. Den is the architect helping define how AI models can perform real-world actions through standardized, composable tools. Matt and Den, welcome to The Geek in Review. So happy to have you.

Greg Lambert (03:50)
Welcome, guys.

Matt Samuels (03:51)
Glad to be here.

Den Delimarsky (03:52)
Thank you for having us.

Greg Lambert (03:54)
All right. So, Den, we’ve heard, like Marlene described in the intro, that MCP is described as the USB-C for AI. As someone who is still using an iPhone 14 with the Lightning connector, what does that actually mean for a law firm that has data scattered across iManage and Slack and LexisNexis and all these other verticals? What does that mean for us?

Marlene Gebauer (04:11)
Guilty.

Den Delimarsky (04:28)
Yeah, for sure. By the way, don’t be ashamed of the iPhone. I recently upgraded from an iPhone 10 to a 17, so it took me a while. There was a forcing function for me to upgrade, but now it’s all USB-C.

But anyway, one of the big challenges folks have when they use AI is, how do you make it useful with real data that usually lives inside a wide variety of silos, right? A lot of the models on the market are trained on a specific corpus of data, and that data is usually publicly available data. It’s on the internet. It’s not specific to your organization. It’s not specific to your practice. That makes a ton of sense because a lot of this data is confidential, private, and only for specific workloads.

But people still want to leverage the power of AI to make sense of it, right? If you have specific spreadsheets, or any specific documents that help you outline future cases, how do you get models to universally access that data? And like you said, the big challenge here is that there is a variety of these sources, structured in all sorts of ways. Slack works in its own way. LexisNexis structures data in its own way. So if you would typically tell people, well, you have to write bespoke solutions, different applications or APIs, that’s a lot of work. It’s a lot of heavy lifting.

So this is where MCP, or Model Context Protocol, comes in. It essentially creates a universal set of rails that says, if you want to use data with AI models, here’s how to do it. It’s a very opinionated protocol. It means if you want to do things like return results, here’s how you do it. If you want to do things like securely access the data and handle authentication and authorization, here’s how you do it. That opinionated piece might seem a little weird for some people because they’re like, well, my data has all sorts of different conventions. But the forcing function of saying you have to follow this universal, USB-C-like adapter protocol means you can now connect it to any client and it works just as well.

So I can plug an MCP server into Claude, and it works just as well if I plug it into another MCP client that also supports the protocol, because the protocol itself is open source. It’s a massive gateway for enabling data access to any model in a way that doesn’t force people to constantly rewrite custom bespoke solutions just to access a specific data source.
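The universal-adapter idea Den describes can be sketched in a few lines. MCP is built on JSON-RPC 2.0, with standard methods such as `tools/list` and `tools/call`; the snippet below is a simplified illustration of that shape, not the actual MCP SDK, and the Slack and iManage tools are hypothetical stand-ins:

```python
# Simplified illustration of the idea behind MCP's wire format (JSON-RPC 2.0):
# every server, whatever backend it fronts, answers the same requests in the
# same shape, so any client can talk to any server.
def handle_request(tools, request):
    """Dispatch one JSON-RPC request against a server's tool table."""
    reply = {"jsonrpc": "2.0", "id": request["id"]}
    if request["method"] == "tools/list":
        reply["result"] = {"tools": [{"name": name, "description": t["description"]}
                                     for name, t in tools.items()]}
    elif request["method"] == "tools/call":
        tool = tools[request["params"]["name"]]
        text = tool["fn"](**request["params"].get("arguments", {}))
        reply["result"] = {"content": [{"type": "text", "text": text}]}
    else:
        reply["error"] = {"code": -32601, "message": "Method not found"}
    return reply

# Two hypothetical servers fronting very different systems, exposed identically:
slack_tools = {"get_conversation": {
    "description": "Fetch messages from a Slack channel",
    "fn": lambda channel: f"messages from #{channel}"}}
imanage_tools = {"search_documents": {
    "description": "Search the document management system",
    "fn": lambda query: f"documents matching '{query}'"}}
```

A client that understands this one shape can plug into either source; the composability Den mentions is a matter of issuing `tools/call` against several servers in one session.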

Greg Lambert (07:15)
And the models themselves aren’t created for iManage, which holds all of our document management documents. It would be iManage that creates the server that allows you to access that. Am I understanding that right? So technically the security could be set up on their end to know who I am and respect what I can and can’t get into.

Den Delimarsky (07:43)
Right, right, absolutely. And I think there’s obviously a good number of official MCP servers that anybody can create. In your case, you might have iManage creating an MCP server, Slack has its own MCP server. You would also be able to compose them. That’s another underrated superpower of MCP, composability. You’re not just seeing one data source, but you can go into, for example, Claude, and if you connect several MCP servers, say Slack and iManage, you can ask the LLM, you know what, I’m reviewing a case for X, Y, and Z scenario. Can you get me all the conversations that happened and then correlate them with data in iManage?

The LLM would then query the data from both sources, and you, as somebody who cares about the case and not the underlying pipeline, don’t care whether the data behind it is JSON or XML or whatever else. You essentially have that universal pipe that gets the data to your LLM, to Claude, and then it can process the data in a way that Claude expects. So you’re reducing the burden on yourself.

Ultimately, yes, there are official MCP servers, but we also see people build MCP servers for their own scenarios. They have a bunch of tooling in house that they’ve built over years and years of establishing their practice, and they come in and say, you know what, I want to make sense of this data with AI. How do I do this? Because the SDK is open source, because it’s an open protocol, anybody can write an MCP server. They can use Claude Code to write their custom MCP server, and there you go. You can use an official one, but you can also build your own.

Marlene Gebauer (09:25)
So Greg gave an example of iManage. Could you also do something similar if you have a data lake at your organization?

Den Delimarsky (09:34)
Yeah, absolutely. The MCP protocol itself doesn’t restrict what you can do. Think of it as a universal abstraction. On one end, you have USB-C. On the other end of that cable can be anything. It can be USB, DisplayPort, Thunderbolt, anything. If we follow the same analogy of adapters, you have one opinionated end, but behind it you can have any data store, any application, any API. You can talk to some remote server. You can talk to an in-house server. You can talk to an application that runs on your machine. You’re not forced into a specific mode of operation here. The expectation is simply that the way you connect it to other MCP clients is universal.

Marlene Gebauer (10:21)
So Matt, you and your colleagues spend a lot of time talking to law firm chief information security officers. What’s the translation that you find yourself doing most often when explaining what MCP servers are and why they’re valuable?

Matt Samuels (10:37)
Yeah, we use CISO as the short way to put it, but I don’t know if that’s universally well known. Chief Information Security Officers. Also, in my legal role, I’m often talking to lawyers or folks working to get a deal done at the enterprise level. I think Den did a much better job describing a version of the conversation I’m having.

At the 10,000-foot level, what we’re trying to do is give Claude access to your enterprise data so you can make it more useful. In a law firm setting, you mentioned a lot of the tools already. iManage, SharePoint, maybe Gmail or Outlook for email. I like to give a couple of handy examples of things you might not have imagined you could do with AI because you weren’t familiar with these tooling options.

You can have it draft an email. Then, if you’re the CISO, you might not want it to send the email without a human reviewing it.

Greg Lambert (11:33)
I can tell you they do not want you to send the email.

Matt Samuels (11:37)
Exactly, exactly. At least speaking of some of the servers we’ve seen in the email space, they have what are called tools. As the enterprise, you want to understand what tools are available through the server. They might have draft email and send email as two different tools. So it comes down to the admin setting rules and using standard concepts like RBAC and least privilege. Which user groups and roles can have access to what tools? Then training your employee base on how to use these tools in a responsible way.

MCP itself is a pretty abstract protocol, but on a client, not like a law firm client, but an AI client like Claude, we try to package these up into things called connectors or plugins or things we can talk about. We try to make it easy for the enterprise to set thoughtful rules around this in a way that makes sense for them.

Marlene Gebauer (12:34)
I was going to ask, how hard is it to do?

Matt Samuels (12:37)
It’s not super hard. Especially on the admin side, we have pages where you can click on each individual item. First of all, you can decide in the first instance, do I want to connect to the official Slack MCP, or do I want to connect to Greg Lambert’s MCP server, which calls Slack? I bet 99 out of 100 CISOs are going to say, no, use the official one.

And then even within that, you can specify which tools can be used. At least within Anthropic, you can set rules within the tool. So it can be always allow, never allow, sometimes allow. There will probably be more variations on this over time, but we do a lot of work thinking through what the admin experience is like. We think of ourselves as an enterprise-first company. So a lot of the time and thought goes into what permissions a CISO or the admin of a data science team would want in place. It’s point-and-click type stuff.
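The always/never/sometimes rules Matt describes are point-and-click in the admin console, but conceptually they boil down to a small policy table. A hedged sketch, with hypothetical role and tool names:

```python
# Hedged sketch of what an admin's "always / never / sometimes allow" tool
# rules compile down to. Role and tool names here are invented; real products
# expose this as point-and-click configuration rather than code.
ALWAYS, NEVER, ASK = "always_allow", "never_allow", "ask_first"

POLICY = {
    ("associates", "email.draft_email"):  ALWAYS,  # drafting is low risk
    ("associates", "email.send_email"):   NEVER,   # a human must hit send
    ("associates", "slack.read_channel"): ASK,     # prompt before each use
}

def rule_for(role, tool, default=NEVER):
    """Least privilege: anything not explicitly granted falls to the default."""
    return POLICY.get((role, tool), default)
```

The choice of `NEVER` as the default is the least-privilege posture the conversation keeps returning to: unlisted tools stay off until an admin turns them on.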

Greg Lambert (13:40)
I can hear my security ops people saying, but is it auditable? Can I see who’s doing what? Is that something you think about with these tools?

Matt Samuels (13:53)
We definitely do. At a high level, if the admin said, hey, these folks can’t use this tool, they shouldn’t be. And I would hope there’s a way to look into debugging that sort of issue. But then we do have things like a compliance API. Claude itself, as one of the clients, and I’m sure this is true for all of them, when they invoke these servers as part of a prompt, like the one Den asked, search through Slack to find me all relevant information and then populate iManage, the model should tell you in the conversation itself when it called the tool and what it did with it. Then afterward, you could look at the chat and see what happened, and you can use things like the compliance API to do it at scale.

Greg Lambert (14:47)
In our prep call, and this kind of expands on the question I asked earlier, we talked about whether the AI respects my document permissions. Den, do you mind walking us through how MCP uses OAuth 2.1 to ensure that if I’m not allowed in a folder in the DMS, the AI isn’t allowed in there either? How do I go about explaining that?

Den Delimarsky (15:21)
Yeah, of course. One of the things we’ve done from the very beginning with the protocol is make sure it respects a lot of the security boundaries that are already well established in the ecosystem. If you look at MCP through a purely pragmatic security lens, a lot of the concepts are familiar to basically anybody who’s been building applications dealing with user authentication and authorization for the past two decades.

It snaps to OAuth, which is a universal standard, again, like MCP. It’s not universal in an absolute sense, but the vast majority of services use OAuth in some capacity. You can use OAuth to gate access to your MCP server the same way you would gate access to any website or API or any other resource.

What that enables is that when you connect an MCP server that accesses private data, whether that’s your case management system or Slack, you’re not going to add it and instantly have access to every conversation, document, and piece of information that exists there. Instead, like in the tool-calling scenario Matt described, before you call a tool the server can say, hold on a second, you did not provide me a credential by which I know you can access this document. Can you please log in?

What happens then is a flow kicks off, usually through your browser or sometimes in-app for some MCP clients, that’s going to ask for your credentials. Those credentials are standard to the service you’re accessing. So if you’re authenticating with Slack, you’re probably going to get a Slack authentication prompt. This is where you can log in with your Google credentials, your email, go through the standard flow of two-factor auth, whether you use a passkey or a security key like a YubiKey.

A lot of this lives outside the boundaries of MCP itself. MCP does not dictate how you do 2FA. It doesn’t talk about exactly what the format of the credential or token is. It doesn’t care, as long as you’re using OAuth.

Once you go through the OAuth flow, your client, in this example Claude, gets a token. A token is basically a string that represents the user. It says Greg and Marlene are part of this group, they have access to X, Y, and Z resources. Then the client takes that token, and when it invokes a tool, for example get Slack conversation, it sends that token as well and says, by the way, when you ask for this conversation, this is Marlene asking, here’s her token. Then you get conversations back that only Marlene can access.

You as Greg are going to log in with your credentials, and you’re going to have access to the resources you have access to. Also, MCP servers themselves, as Matt called out, can have many different tools. I might have get Slack conversation, I might have send DM, I might have read channel. Some of them are accessible to everybody. If you can log in, you’re a valid user and can use the tool. In some cases, some tools require additional scopes and say, no, before you can access this tool, you need this additional permission.

So for example, if we’re talking about legal cases where Greg said, this is only for me, nobody else on my team can access it, and Marlene logs in with her credentials, even though she authenticated and went through the authorization flow, that doesn’t automatically mean she gets access to the same tool and the same data from that tool that Greg had. A lot of these conventions are traditional OAuth. Nothing here is MCP-specific. This is the implementation of an open protocol that gives us OAuth, and MCP leans on that as the foundational piece for many of these security requirements for MCP servers.
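The server-side check Den walks through can be illustrated concretely. In this sketch, token introspection is faked with a dictionary lookup; a real MCP server would validate an OAuth 2.1 access token (for example, a signed JWT) issued by the identity provider, and the tool and scope names are hypothetical:

```python
# Sketch of scope-gated tool access. A missing or unknown token means the
# client should kick off the OAuth flow (401); a valid token without the
# tool's required scope is authenticated but not authorized (403).
TOKENS = {
    "tok-marlene": {"sub": "marlene", "scopes": {"conversations:read"}},
    "tok-greg":    {"sub": "greg",    "scopes": {"conversations:read",
                                                 "cases:read"}},
}

def call_tool(token, tool_name, required_scope):
    claims = TOKENS.get(token)
    if claims is None:
        # No credential: tell the client to start the login flow.
        return {"status": 401, "error": "unauthorized"}
    if required_scope not in claims["scopes"]:
        # Logged in, but this tool needs an additional permission.
        return {"status": 403, "error": "insufficient_scope"}
    return {"status": 200, "result": f"{tool_name} ran as {claims['sub']}"}
```

This mirrors the example in the conversation: Marlene can authenticate successfully and still be refused a case-reading tool that only Greg’s token is scoped for.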

Marlene Gebauer (19:35)
Yeah, this is interesting. It syncs up nicely with need-to-know requirements in law firms. I was also thinking about ethical walls and how you protect from the wrong people getting access to the wrong materials.

You’ve talked about MCP registries. In a law firm environment, how does a centralized IT group use a registry to prevent a lawyer from setting up their own server and potentially leaking privileged data or introducing security risks? Or disgruntled employees or anything like that. That never happens. What are you talking about?

Greg Lambert (20:13)
We would never have lawyers do that. Never.

Matt Samuels (20:22)
Yeah, it’s a great question. I think it comes back to what we were chatting about in my previous answer, which is making sure the AI clients, Claude, ChatGPT, Gemini, you name it, spend a lot of time thinking about what the admin experience is and what the default settings are.

If you’re a thoughtful enterprise, and I think this is the norm at most places, the admin should say, hey, I need to approve any net-new MCP server the organization wants to use. They effectively have a dashboard where they can toggle on and off whether the server can be used at all. Over time, that’ll probably get even better. There will be even more granular permissions. You might think of groups or individuals within groups and what tools they use.

The way I think of it comes back to least privilege, which was kind of the concept you were talking about, Marlene. A lot of this comes down to basic enterprise security and IT practices. They apply pretty neatly, at least on a client like Claude, and I assume most of the other major clients. They’re thoughtful about what the baseline is for what you want your organization to do at the admin level. Then each individual person, maybe I’m not a Slack user, but it’s approved within Anthropic, so I’m not going to enable that tool. Or there may be more risk-forward enterprises that say, actually, I’m okay with a certain group of users playing around with what we call custom MCP servers in Claude.

At the end of the day, it comes back to a level of risk tolerance and then communicating to your employee base, hey, we’ve now added the iManage server. It’s default enabled for maybe the entire law firm, or only a couple of folks in the law firm while it’s in a testing period. I think that’s the way we think about it here.

Marlene Gebauer (22:15)
Okay. So I have another one for you. We’ve seen reports of Rio Tinto’s legal team connecting an LLM to 4.5 million iManage documents via MCP in a few days. Matt, what does that tell you about the speed of adoption we’re about to see in Big Law? I’m interested in hearing this answer. Because we’re usually so slow.

Matt Samuels (22:43)
Yeah. I can’t speak for the law firms. We have a number of law firm clients or customers. I’m sure the same is true for many other AI clients. I think the speed here is faster than I would have expected before joining Anthropic. Part of the reason is the utility is so obvious.

I think it’s become more obvious in the past couple of weeks and months with tools, not only from Anthropic but from other companies. We might chat about this a little later, but Claude Code is a product we have that’s been pretty MCP and plugin-forward, and you can do a lot of things there. Historically, it’s been a little scary if you’re not a technical user to do your work in the terminal. We released a product more for other types of knowledge workers, like lawyers, called Cowork, and I’m a heavy user of it myself. How much better I am at my job, and how much faster I can triage issues and give my internal clients good legal service using that tool, is amazing.

And it’s largely because of MCP. We have a lot of Slack and Gmail and other conversations. Being able to pull them in and ask in natural language, what’s going on here? Or for those of us who are millennials, ELI5, explain like I’m five, what’s happening with this complicated technology? It can do that. The models are smart enough to know where to look within Slack to find that information and then digest it for you.

There are a lot of other examples of things I and others on our legal team and other teams have done. But once you’ve played around with these tools, you see the immense value. Law firms have a pretty standard set of tools, places where their enterprise data lives. A lot of those have MCP servers. So I think it’s about getting yourself set up and playing around with it. You’ll see pretty amazing gains in the knowledge work you do.

Greg Lambert (24:47)
That’s my number one advice. Get in there and play around with it. See what it can do.

Matt Samuels (24:51)
Type of person.

Greg Lambert (24:52)
It worked pretty well. So, Den, I wanted to talk about, we mentioned it briefly, with things like Cowork and Code, but there’s a real big push now toward agentic workflows. Again, I think we’ll probably talk about this for the rest of the conversation. But you mentioned that MCP isn’t only about searching the data or reading the data, it can act on the data. I break it down like this. We’re at a point right now where it’s not just reading and giving you back an answer, it’s reading and writing and taking next steps. So do you mind describing a workflow where an agent doesn’t just find a contract within the DMS or within a product like Ironclad, but actually helps redline it or draft it, and then drafts a follow-up email in the lawyer’s voice?

Den Delimarsky (26:03)
A lot of the capabilities you just described go beyond the protocol itself. The protocol is the conduit for a lot of the requirements, the ability to access documents, the ability to read them. Ultimately, the agent harness is the one responsible for doing things with MCPs or MCP servers.

So you would have, for example, an agent that has access to a myriad of MCP servers. Servers can expose a number of primitives. Things like tools, as Matt called out, and tools are basically functions. So we can say my server can read a Slack conversation, send a DM, read a Gmail thread. It can have resources, which can be documents or database records or anything like that. It also has primitives such as tasks, which is basically asynchronous processing. I call them atoms myself because they’re components to this.

Essentially, what that enables is saying, hey, I need you to go do something, and I’ll do something else while you do that. I’ll kick off a task on some remote MCP server, and you’ll get back to me when you’re done. When you start assembling these together, you start seeing the true power of these agents and MCP servers because they are now able to pull documents, pull different resources, database records, combine them, and realize, for this one, we need to go do some digging through another database. It might be a little slower, so we’re going to kick off that task and get back to it whenever it’s done, but I’m going to continue building the set of requirements for my next case or anything of that nature.

So MCP is becoming the plumbing for a lot of these agentic harnesses where, by themselves, they can trigger actions, think through requirements that need to be done, break them down into a set of tasks, and then execute those tasks. But to successfully execute those, you need those primitives, and this is what MCP exposes. You have those primitives that now the agent can act on and act with.

That did not exist before. If you look back four years, when the first models were coming out, you had the basic experience of asking the model something fun and seeing it answer. Okay, that’s great. But now they can actually act. If you think about it, MCP enables your agent to have actual robotic arms to go and reach for the right data, the right applications, the right content it needs to then act and put together whatever you’re looking for, whatever task you’re trying to accomplish.

Greg Lambert (28:57)
Yeah, yeah, I’ve written a story about this and we call it giving the AI hands to work with. It can actually do it. It’s a really exciting time. I know it’s been probably since November that this has taken off and caught fire. But watching what Code and Cowork and Codex and other things out there are doing, it’s a different AI than we had a year ago. It’s pretty exciting.

Den Delimarsky (29:37)
It absolutely is.

Matt Samuels (29:37)
One analogy for it, because you have a lawyer and CISO and kind of legal-adjacent audience, I think a helpful way to think about this and explore how it might be useful is to imagine what tasks you actually do as a law firm partner.

Think about your associate. I remember taking the bar exam and being kind of surprised, like wow, you are expecting me to answer this legal question based on my latent legal knowledge. That’s what a pre-MCP LLM can do, right? All it has access to is what it was trained on. A much better associate can do legal research. They can send an email. They can upload something to a database. So think of the tasks you normally do as a lawyer and how you delegate things to associates or junior partners. If you start thinking of the models this way, and Den mentioned the harness, that’s kind of the rules of the road for your law firm. I think that’s an interesting way to think about it, and you can get creative once you break things down into component tasks and the actions that you or the associate would take.

Marlene Gebauer (30:42)
All right. So Anthropic recently launched the MCP apps. I want to pull on this a little bit. That’s moving from the text-based chat to these interactive interfaces, sort of what you were talking about now. I would say we run the gamut in terms of people’s abilities and experiences in the legal community. There are some people who are going gangbusters. They’re trying everything. They get it. And there are other people who are still dipping their toe into the chat and trying to figure out how that’s going to work.

Plus, we have these discussions about how this work is going to impact the workflow in terms of staffing, both for juniors and for adjacent professionals. So how do you see lawyers and other adjacent professionals using this widget inside a chat to manage a deal or review deposition transcripts or timelines or something like that?

Matt Samuels (31:55)
Yeah, I’ll take this and then Den can field more on the app side, because I think I’ll go back to my lawyer training. I think one example for deposition transcripts and one for document review and privilege calls makes sense.

As a lawyer, you’re used to the surface you most commonly interact with. So when you’re thinking about a deposition transcript, it has this crazy Arial font and it’s on this page and some things are in bold, some things aren’t, and you have the reporter note. What’s going to look different, what an MCP app will allow you to do, is pull up the actual visualization of what that looks like. So it feels like you’re much more natively looking through it, and you can type into Claude, hey, show me that point again that opposing counsel made. I forget what page it was on, maybe page 40. You can ask a question that ill-formed, and if the deposition transcript company made an MCP app that’s well done, it would literally find the relevant text and populate the page.

Then I think what’s cool about MCP apps is you can theoretically highlight it or do other transformations, and it will ping back to the actual source of record and update it. So it’s a way to help you visualize things in the way you’ve come to appreciate because you’ve been working with deposition transcripts for 20 years.

To use another example, maybe a more junior associate is in some sort of document review platform. I’ll use Relativity because that’s what I was familiar with. If they made an MCP app, you could start doing your doc review in Claude. It’s still all happening through Relativity, but you get to see the documents populate. You can say, wasn’t there another document like that? And Claude will know how to find that document, surface it to you, and then you can do the privilege review on that. There are endless things you can do. But I think everyone is more comfortable working in the format these great software companies have developed, so it allows you to have that native experience within the LLM client.

Marlene Gebauer (33:59)
As little friction as possible.

Greg Lambert (34:01)
Yeah. Well, Den looks like he wants to fill in some gaps here, I can tell.

Matt Samuels (34:02)
Anyway.

Den Delimarsky (34:02)
Yeah, yeah. I think Matt did a wonderful job explaining the core benefit of MCP. But I think the next tier of evolution in the workflow goes back to what we talked about earlier, because the power of MCP is composability.

It’s one thing when you have the app itself. So context-switch reduction is a huge benefit right away. I’m working in Claude. I don’t need to copy and paste things from my document on one screen and Claude on the other and transfer them manually. It’s Claude, can you go do this? I can highlight something inside the MCP app, like Matt called out, and then say, okay, can you pull in all the conversations we’ve had in the past year around this topic or this particular case? It’s going to look through your Slack and Gmail or wherever else and then pull that context in.

So now you’re in this shared workspace between you and your agent, in this case Claude, that has access. You’re visually looking at a specific case, at a specific set of defendants maybe, and you want to ask, okay, what did we email them before? You can do that through multiple MCP servers that are connected. You’re not constrained to one task.

Think about how you would do this work before MCP existed. You’d have Gmail in one tab, something else in another tab, Slack in another. Then it’s okay, let me search through Slack. Let me find these conversations. Let me review the manual and see if this is right. Let me go through Gmail. That’s a lot of toil. It’s a lot of manual work that MCP connected to an LLM removes, because now you can ask the model, go. You have access to Slack, you have access to email. I’m highlighting this in my MCP app. Go find me the relevant information. Then you’re operating within the context of the same space.
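The multi-server workflow Den describes can be sketched in a few lines of Python. This is a hypothetical illustration of the composability idea, not the actual MCP SDK or protocol; the server names, tool names, and documents are all invented, and a real client would speak JSON-RPC to live MCP servers rather than call local functions.

```python
# Hypothetical sketch of MCP-style composability: one question fanned out
# across several connected sources, results merged in one workspace.

def make_server(name, documents):
    """A stand-in for a connected MCP server exposing one 'search' tool."""
    def search(query):
        return [d for d in documents if query.lower() in d.lower()]
    return {"name": name, "tools": {"search": search}}

# Three "connected" sources, as in the Slack/Gmail example above.
servers = [
    make_server("slack", ["Thread: deposition prep for Smith case"]),
    make_server("gmail", ["Email: Smith case scheduling", "Email: lunch"]),
    make_server("docs",  ["Memo: Smith case timeline"]),
]

def find_everywhere(query):
    """Fan one request out across every connected server and merge the hits."""
    hits = []
    for server in servers:
        for result in server["tools"]["search"](query):
            hits.append((server["name"], result))
    return hits

for source, text in find_everywhere("Smith case"):
    print(f"[{source}] {text}")
```

The point is the shape, not the code: instead of the user searching three tabs by hand, the agent holds the connections and does the fan-out and merge itself.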

Marlene Gebauer (35:55)
One of the things I particularly like about this is we sometimes have discussions about which AI tool to use for certain things. Is it a productivity tool or is it a legal tool? We spend way too much time talking about this and what should be used for what. But in your scenario, they would both be connected, and you would be asking a question and they would both be responding based on what they should be used for. Am I understanding that correctly?

Den Delimarsky (36:31)
Yeah, for sure. And because you have access to all this information, you can use this to start drafting things too. You can also have another MCP server that starts a new Google Doc and says, can you put all the notes we’re working on there, summarize them as we go, so I have that one point of reference to go back to? You’re not manually jotting down every single note and every single thing you’ve done. You can have Claude do this for you. Because it has access to this wealth of different tools and services, it can plug in through all of them, as long as they have an MCP server.

Matt Samuels (37:04)
Yeah, it did.

Greg Lambert (37:05)
I’m curious, because we talk a lot about working directly in Claude. I know tools like Harvey are also looking to maybe go the other way, where you can go there and use Claude within their environment as well. Do you see MCP as something where, with the context switching between programs, technically I could be in Relativity and have Claude working on something while still working in Relativity? Does it work both ways, or does it only work from Claude out to the other tools?

Matt Samuels (37:47)
Yeah, it should work bi-directionally. I think there are probably questions about latency and when it gets updated, but that’s true of every tool. I think about Google Docs back in the day. If two people are editing the same section, you get that little conflict message. I think it can update the system-of-record document, but you have to be thoughtful about timing.

With Harvey itself, as they use the APIs of a lot of these LLMs, they themselves are an AI client, so they might have MCP support too. And you can configure within Harvey the admin controls we’ve talked about for Claude.

Greg Lambert (38:28)
All right, so I want to jump into something we’ve all heard about over the past few weeks, and that’s skills. The idea that you can package the firm’s standard operating procedures or tone of voice into essentially a markdown file, a portable file that can be read and used by AI.

So what would be your advice to a firm on using skills to help with things ranging from junior associates learning the firm’s style to other use cases where you see skills being packaged up to do useful work?

Matt Samuels (39:20)
Yeah, I think skills are really complementary to MCP servers, and they work in many of the same ways. Claude will understand when you’ve used, for lack of a better term, trigger words, and then say, okay, you want me to use this skill. Now I know I’m supposed to bring that into the sequence of events, and I’ll have the knowledge embedded in the skill.

So examples we’ve seen are, hey, this is how we do NDAs. You can package the rules around when you deviate from an NDA, or when something needs to get escalated to a particular person. Tonality and the firm voice, or maybe there’s a set of terms you don’t like to use in briefs. Anything you think of as a procedure is helpful to put into a skill. I’m sure Den might have more creative ways to use them, but those are some of the examples we’ve seen.
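Since Greg described a skill as essentially a markdown file, here is a minimal sketch of what a firm’s NDA skill might look like. Every heading, rule, and threshold below is invented for illustration; it is not an actual Anthropic skill file, just the kind of SOP content Matt is describing, written down in plain markdown.

```markdown
# NDA Review (hypothetical firm skill)

Use this skill whenever the user asks to review or draft an NDA.

## Firm voice
- Plain English; avoid "heretofore", "aforementioned", and similar terms.
- Keep recitals to one short paragraph.

## Deviation rules
- Standard confidentiality term is 2 years; anything longer requires
  partner sign-off before it goes out.
- Any non-mutual confidentiality clause must be flagged, never silently fixed.

## Escalation
- If the counterparty requests an injunction carve-out, escalate to the
  supervising partner before proposing language.
```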

Greg Lambert (40:12)
Yeah, Den, hit us. What can we be using skills for?

Den Delimarsky (40:14)
I have very little to add to what Matt described. He nails it. No, I think that’s accurate. It’s basically a way to codify processes or things you want done in a consistent way.

People often confuse skills and MCPs and say one replaces the other. Really, as Matt said, they complement each other. Skills can refer to MCP servers. So for example, you can define something like, hey, if I’m ever working on a case that requires data from system X, use this MCP tool to go get that data. And when you return the data, make sure it’s formatted with this table heading in this font, and make sure to also put it into Google Docs so it’s taken as a note with this document format.

So you’re essentially creating, as I refer to them, SOPs, standard operating procedures, saying if I want to do X, here’s how to do it. Within a skill, you can refer to MCP servers, write custom scripts, or call applications that run on your machine.

For example, especially where you’re dealing with legacy applications, something that runs on your computer and hasn’t been updated in 20 years but is so ingrained in the ecosystem that it’s still there, you might not have an MCP server for it because maybe the company isn’t going to build one. So how do you use that tool? You can encode it into the skill and say, by the way, use this application, launch it, and inside of this application you can do X, Y, and Z. This application has a command-line interface, and here’s how you can, for example, insert a document, share it with somebody else, and so on.

You encode a lot of these things, and ultimately a skill can use and refer to many different MCP servers. It also reduces the burden on the model, because when you invoke a tool based on your prompt and say, look up the conversations on this topic, the model then has to think, okay, do I need to look in Slack? Gmail? Some other task board? With a skill, you can be explicit and say, if I ask you to do this, always use this tool. So the model knows, okay, you asked me to review an NDA document. I know exactly the tool I need for this. I’m going to use that. So it speeds up your process and helps you save a little on tokens too.
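The routing benefit Den describes at the end, a skill pinning a kind of request to one specific tool so the model does not have to guess among every connected server, can be sketched in plain Python. This is a hypothetical illustration only; the tool names and trigger phrases are invented, and a real skill expresses this in markdown instructions rather than a dictionary.

```python
# Hypothetical sketch of skill-based tool routing. Without a skill, the
# model weighs every available tool; with one, the choice is pinned.

# Everything the model could reach through connected MCP servers.
AVAILABLE_TOOLS = ["slack.search", "gmail.search", "taskboard.search",
                   "ironclad.get_nda", "gdocs.create_note"]

# A skill's explicit rule: "if I ask you to do this, always use this tool."
SKILL_ROUTES = {
    "review an nda": "ironclad.get_nda",
    "summarize our notes": "gdocs.create_note",
}

def pick_tools(request):
    """Return the tools the model should consider for this request."""
    for trigger, tool in SKILL_ROUTES.items():
        if trigger in request.lower():
            return [tool]          # skill match: one tool, no guessing
    return AVAILABLE_TOOLS         # no skill: every tool stays in play

print(pick_tools("Please review an NDA from Acme"))   # one pinned tool
print(pick_tools("Find conversations on this topic")) # all tools in play
```

This is also where the token savings Den mentions come from: a pinned route means the model reasons over one tool description instead of the whole catalog.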

Greg Lambert (42:41)
Yeah, sounds interesting. Before we wrap this up, I want to give you guys a chance. Is there anything law firms should be paying attention to, tools that are already out there that we may not be taking advantage of?

Marlene Gebauer (43:01)
What should we be preparing for?

Matt Samuels (43:04)
I’ll do a plug for Cowork. We talked about it earlier. Skills and MCP servers, we talked about those. There’s a kind of abstraction of those called plugins that we created first for Claude Code and now for Cowork. I think other companies are adopting these. It’s a way to package these things together.

So you could have a plugin, I mentioned NDAs before, you’ve got the NDA plugin. It might connect to Ironclad or DocuSign or whatever MCP servers you might have over there, but then it has the skills with your firm voice, and you can also do other things. You can have little webhooks or subagents that you can define for more technical users. An example of one of those might be: check to make sure the user of this plugin has the right authority to sign, or something like that.
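To make the NDA plugin example concrete, a bundle like the one Matt describes might be declared in a small manifest. The field names and structure below are entirely invented for illustration and are not the actual plugin schema; the point is only that a plugin groups MCP servers, skills, and subagent checks into one installable unit.

```json
{
  "name": "nda-plugin",
  "description": "Hypothetical sketch of a plugin manifest; field names are invented, not the real schema.",
  "mcp_servers": ["ironclad", "docusign"],
  "skills": ["nda-review", "firm-voice"],
  "subagents": [
    {
      "name": "authority-check",
      "goal": "Confirm the user has signing authority before anything is sent"
    }
  ]
}
```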

I think there’s this cool new abstraction that’s been created. And one thing we touched on as we were chatting earlier: the composability piece really is a huge unlock. I’m still trying to make better use of it myself. But when Den was sharing an example before, man, what if you found this golden document and it automatically added it to the timeline and added it to your proof outline? Once you start thinking about things that way, at the end of the day, what are you doing as a lawyer? If you’re a litigator trying to prove a case, how can you more efficiently get the data that used to be sitting in Gmail or Relativity into the artifact you’re going to use to win the case? That’s super cool.

Greg Lambert (44:40)
Den, anything to add to that?

Den Delimarsky (44:42)
I’ll say, as uncreative and cliché as it sounds, because we talked about this earlier, model capabilities now are not the same as they were a year ago, two years ago, three years ago. So the best way to be in the know is to use the tools. You have access to Claude, use Claude. Try it. Try it for reviewing documents, for doing deep research, for connecting to your data sources and applications through different MCP servers, and see what output you get.

I’ve seen the highest success rates from people who actually start experimenting and saying, huh, I wonder if Claude can do this. They start asking Claude and then realize, hold on a second, there’s an MCP server for this. Maybe I should use that. The fastest way to get up to speed, and to realize how big an unlock the model advances are together with MCP, is to try it.

Marlene Gebauer (45:38)
I had a question as I was thinking about this in terms of knowledge management in organizations. It seems there is a real demand for knowledge to be in real time and specific to the actual instance, as opposed to how we’ve done it in the past, where it’s much more structured and formal. You’re going out to templates and looking for best practices. This seems more fluid. I’m wondering if you guys can comment on how what we’ve talked about impacts that.

Matt Samuels (46:20)
Yeah, I think the way I think about it is each company is going to have to think through what its knowledge management stack looks like and what best practices are. Skills can be helpful for this. Maybe you have a skill that says if you’re using Claude for X, Y, and Z, make sure part of it updates the team’s channel for this case, or make sure it updates the timeline, if there’s a timeline that’s really a system of truth for the case.

But I do think it’s incumbent on each company to think creatively now, because I 100 percent agree with you. There’s so much net-new data being created, and I think the CISOs and the ops teams and the lead lawyers need to be thoughtful. What does a system of record look like for what we’re doing?

Den Delimarsky (47:09)
Yeah, I’ll add that I think there’s a paradigm shift that’s going to happen, and it’s important to account for it. Before, we thought about knowledge management through the lens of who in my firm can access this and be able to learn from it and use it. Now the question is, is this also helpful to agents? Whatever format I’m storing the data in, will an agent be able to successfully access it and act on that data to provide me a summary, insights, or recap?

I’ve seen it too many times where folks have data and it’s stored in some super proprietary binary format that only one application can access. Of course that’s going to be problematic for agents, versus storing it in formats that are more open. I’m not making any opinion about which format to store things in, but as folks think about the transition to this new world where agents are doing much more than before, how do you enable those agents to have the same access to the context and knowledge of your firm, your organization, or other partners you’re working with, and not only humans?

Marlene Gebauer (48:20)
Yeah. And it sounds like having good, organized data is still important. So that hasn’t changed.

Den Delimarsky (48:29)
No, the importance of that is only going to increase, because the agents themselves rely on some kind of taxonomy, some kind of organization. If I ask an agent, go find me a document related to case X, Y, and Z, it should know, okay, cases from this year are here, instead of going through a folder with a million documents and saying, give me an hour to sort through them. You still have to optimize for discovery and organization and having some kind of predictability.

Marlene Gebauer (49:00)
Okay. So before we get to the crystal ball question, what are some of the ways you guys are using AI in your personal lives? You’re the technologists, so what are you using it for in your own life?

Matt Samuels (49:23)
I’ll go first, because then I’m curious for Den to come over the top and really impress.

Marlene Gebauer (49:26)
It’s not a competition. This is not a competition. All right. It’s always a competition. Go ahead.

Greg Lambert (49:30)
It’s always a competition.

Matt Samuels (49:33)
I definitely use it more as a power user at work than I do in my real life, but I’m starting to think more creatively about how to use it. If there are things in your life that are cumbersome, I think there are ways to unlock how to do them more efficiently.

And then on the knowledge side, I find myself, ever since ChatGPT came out back in late 2022, thinking, oh, I can learn about any subject in the world in a way that’s more personalized to me and ask follow-up questions. Things like investment advice, or hey, someone told me about this, can you tell me about it? In the way YouTube was a big unlock for getting more knowledge about the world, I still use it for that basic purpose.

Den Delimarsky (50:16)
A lot of what Matt described is exactly right, and it’s precisely what I want to use these new modalities for. My big unlock is basically reducing toil in my life.

It’s a big personal relief to use a lot of these tools almost like a second brain. Previously, we’d be in a meeting like this one, and imagine it’s a work meeting. I need to jot down notes on paper and be like, okay, Greg said this, Marlene said this, Matt was talking about this, let me follow up on this. That takes away attention, because now I have to follow it manually and figure out, hold on, did Greg say this?

Now we have tools that auto-transcribe it. Now it’s a Google Doc. Then I can go to Claude and say, hey, last week, Greg, Marlene, Matt, and I were talking. Can you summarize the action items for me? And then it’s like, yep, you talked about this topic. It’s such a huge relief that I don’t need to worry about a lot of these toil things. A lot of the capabilities are only going to get better, and I’m super excited to focus my life on things I care about and am excited about, versus these menial tasks.

Greg Lambert (51:32)
Well, good. Thanks for sharing all of that. So we are at the crystal ball question. Den, I’m going to start with you, and then we’ll go to Matt. Looking into the near future, what change or challenge do you see in your crystal ball that you think we need to be prepared to face?

Den Delimarsky (51:53)
I’ll give you a very unsatisfying answer. Having seen the last six months, I have no answer for what the crystal ball holds. I don’t know. Anything I say right now is going to be completely wrong in three months, six months, or a year. So I’d say to me, we’re living through a time of very, very fast changes. The best way to take advantage of this is to use the tools and try to figure out how to apply them to your life. That’s going to be the easiest way to ride the wave. That’s my answer. It’s very unsatisfying.

Greg Lambert (52:30)
No, it’s probably the most truthful one, though. All right, Matt, how about you? You want to take a swing at this one?

Matt Samuels (52:37)
Yeah, strong plus one to the I-have-no-idea. But I’ll say the thing I think about in a semi-worrying way is the loss of the art and the craft of being a lawyer: people delegating too much to these tools, losing the forest for the trees, or not paying attention to the why and the strategic calls.

I’m optimistic people will use these more for the low-leverage tasks and focus on the strategy and the craft. But it’s not hard to imagine a world where you’re negotiating a contract and the other side is using Claude or ChatGPT to say, push back on all of these points and give me the strongest redline. Then your side does the same, and you are never actually compromising and thinking about what matters. So I worry a little about the profession and the loss of the human, you know, the we’re-all-part-of-a-trade-together element of it.

Greg Lambert (53:35)
Well, Matt Samuels and Den Delimarsky from Anthropic, thank you both for coming in and having this conversation. I think our geeky audience is going to be busy building MCP servers over the weekend.

Den Delimarsky (53:52)
Yeah, and if you have questions, we’re always open to feedback. You can reach out. Again, it’s open source on GitHub, so go there and give us your feedback.

Marlene Gebauer (54:01)
That’s terrific. Thanks to both of you, and thanks to all of our listeners for taking the time to listen to The Geek in Review podcast. If you enjoy the show, please share it with a colleague. You can find us on LinkedIn and now on our Substack channel.

Greg Lambert (54:20)
And, well, Den, you talked about going to GitHub. Is there any other place, Matt, where people who want to find you should go? Is it the same place?

Matt Samuels (54:31)
Yeah, if you end up being an Anthropic customer or potential customer, you might end up chatting with us through that lens. But for contributions to the protocol itself and feedback, I very much agree with Den. Go through the project on GitHub; it’s now part of the Agentic AI Foundation under the Linux Foundation. And if it’s not Den who ends up responding, it’ll be someone else in the giant community that’s been built around this.

Marlene Gebauer (54:52)
And as always, the music you hear is from Jerry David DeCicca. Thank you so much, Jerry. And bye, everybody.

Greg Lambert (54:58)
Bye.