Exclusive: LinkedIn to train AI on UK members’ profiles

LinkedIn will begin using the profiles, posts, resumes and public activity of UK members to train its generative AI models from 3 November, City AM can reveal.

The Microsoft-owned professional network’s new terms of service, which were updated on Thursday, confirm that while private messages are excluded, most public-facing data will feed into AI systems designed to generate content and enhance features across the platform.

City AM understands there will be no proactive notification prompting members to opt in or out, meaning many users may be included by default unless they actively check their settings.

The company has noted in its terms that users can opt out via the ‘data for generative AI improvement’ setting, but this only applies going forward; any data already incorporated into training remains part of the models.

LinkedIn has framed the change as a way to “enhance your experience” and “better connect our members to opportunities”, highlighting applications from recruitment suggestions to AI-assisted content creation.


Opt-out, not opt-in

The decision to rely on ‘legitimate interest’ as the lawful basis for processing user data places the onus on members to protect their information.

For many, this creates a blind spot: professional posts, resumes and comments, which often contain detailed personal information, may be used to train AI models without users’ explicit awareness.

LinkedIn has said it has built safeguards into the process.

A source close to the matter indicated that users under 18 and private messages are automatically excluded, and that feedback, such as a thumbs up or down on AI-generated suggestions, will be incorporated to reduce inaccuracies and harmful outputs.

Even so, privacy advocates argue that the approach places the burden on individuals rather than the platform to ensure informed consent.

The issue is symptomatic of a broader industry trend: tech behemoths such as Meta have already resumed AI training on UK users’ public posts after regulatory scrutiny forced pauses earlier this year.

While such initiatives promise efficiency and new features, they also underscore the tension between innovation and privacy in a post-Brexit regulatory landscape that grants more flexibility than the EU’s stringent General Data Protection Regulation (GDPR).

LinkedIn’s use of generative AI

LinkedIn presents AI training as a tool to improve professional networking, from helping recruiters surface candidates more effectively to suggesting posts and profile updates for members.

In practice, this means that much of the public-facing activity on the platform – from comments on articles to group engagement – could be processed by AI models.

City AM understands that while LinkedIn’s intentions are framed around user benefit, the reality is more complex.

The default inclusion of user content, without active notification, risks undermining confidence in the platform’s stewardship of personal data.

Past interactions, resumes and posts may be used to train models that power not only LinkedIn’s tools but also, via affiliates, Microsoft’s broader AI ecosystem.

Safety and transparency

The company has stressed that AI is also applied to improve platform safety and compliance, including tools that detect harmful content and reduce errors in recommendations.

The move forms part of a wider commitment to responsible AI use, a source close to the matter argued, noting that training methods and feedback loops are designed to reduce potential risks to users.

Despite these claims, experts caution that the lack of proactive opt-in or notification introduces uncertainty.

Members who do not actively engage with their account settings may be unaware of how their professional histories and commentary are being repurposed.

For many, the default reliance on legitimate interest may feel at odds with expectations of professional privacy in a social networking context.

For UK users, the update underscores a critical moment in the intersection of professional networking and generative AI.

While LinkedIn emphasises enhanced user experience and career opportunities, the broader implications for consent, data governance, and trust remain under close scrutiny.

In an era where personal and professional information is increasingly fed into algorithmic systems, the responsibility to communicate clearly and uphold privacy standards has never been more pressing.
