LinkedIn, the platform where professional networking and self-promotion collide, now finds itself in the crosshairs of a major class action lawsuit. The allegation? That LinkedIn has been secretly using Premium users’ private InMail messages to train AI models without their knowledge or consent. For a platform that markets itself as a bastion of professional trust, this is a serious breach of both expectations and contracts. As a side note: while I am not a Premium LinkedIn user, I frequent LinkedIn because it’s one of the few (mostly) apolitical social media platforms out there. And while many of us may scoff at the idea of LinkedIn training AI models on the public content we post, the collection of InMail content, if proven true, is quite disturbing.

[Case: De La Torre v. LinkedIn Corp., U.S. District Court, Northern District of California, No. 25-00709]

The lawsuit, filed in California federal court, claims that LinkedIn violated not only the privacy of its users but also their trust. Premium subscribers, who shell out extra for the promise of enhanced privacy protections, allegedly had their sensitive communications (think job negotiations, intellectual property discussions, and salary talks) used to train generative AI models. To make matters worse, LinkedIn reportedly attempted to sweep this under the rug by quietly updating its privacy policy after being called out.

Consider the type of content often found in LinkedIn InMail messages. These aren’t just generic notes asking to “connect” or “grab coffee sometime.” They often include deeply personal and highly sensitive information, such as salary negotiations, business strategies, discussions of intellectual property, and even private details about career moves or startup funding. The exposure of such content through unauthorized AI training would be more than just a privacy violation; it could have life-altering consequences for the individuals involved, from damaged professional reputations to lost business opportunities.

[Image: Monthly InMail cost for Premium users]

Here’s the kicker: the lawsuit doesn’t just allege privacy violations. It claims breaches of federal law (including the Stored Communications Act), contractual promises, and California’s Unfair Competition Law. The plaintiff seeks damages, injunctive relief, and even “algorithmic disgorgement”: essentially, a demand that LinkedIn delete the AI models trained on the improperly used data. Is that even possible?

And then there’s the Microsoft connection. LinkedIn’s parent company is accused of potentially folding this data into its broader ecosystem, raising questions about where this sensitive information could resurface. Could confidential LinkedIn messages one day power a Microsoft Teams suggestion or autocomplete a Word document? The implications are staggering.

LinkedIn, for its part, denies the allegations, calling them “false claims with no merit.” But as the details of this case unfold, the broader tech industry faces a critical moment. At a time when generative AI is reshaping how companies innovate, this case underscores the urgent need for transparency and consent in data usage.

For legal professionals, privacy advocates, and anyone who’s ever sent an InMail, this story is more than a cautionary tale; it’s a wake-up call. Many of us joked a few weeks ago, back when we thought LinkedIn was only scraping publicly available posts, that most AI results would start with “I am excited to announce I have joined ____ company this week…”. But this is a much more serious allegation. If LinkedIn, a platform built on professional integrity, can allegedly fumble its privacy commitments, what does that say about the wider industry?