LinkedIn is using its users' personal data to train generative AI models, according to recent reports, and the platform has automatically opted users in without obtaining explicit consent. According to 404 Media, LinkedIn quietly introduced a new privacy setting and opt-out option ahead of an updated privacy policy, which now confirms that data from the site is being used for AI model training. The shift follows a broader trend of major tech companies tapping user data to advance their AI efforts.
The updated privacy policy states that LinkedIn may use personal data to improve its services, train AI models, and personalize the platform. LinkedIn says these AI models, which include generative tools such as writing assistants, are intended to make the platform more relevant and useful. Users concerned about their data can opt out under the "Data for Generative AI Improvement" section of their account's privacy settings. However, the company has clarified that opting out will not undo any training already performed on user data up to that point.
LinkedIn further emphasizes in its FAQ that it uses privacy-enhancing technologies to remove or redact personal data in its training sets. It also notes that users in the EU, EEA, and Switzerland are excluded from this data usage for AI training.
Beyond generative AI models, LinkedIn also employs machine learning tools for personalization and content moderation that do not generate content. Users who wish to stop their data from being used for these purposes must fill out a separate Data Processing Objection Form.
This development at LinkedIn comes shortly after Meta disclosed that it had scraped non-private user data for model training going back as far as 2007.