LinkedIn recently disclosed that it has been training its AI systems on user data without seeking explicit consent. The platform now lets users opt out only of future training; there is no way to withdraw data from training that has already occurred. This decision raises significant concerns about data privacy and user control over personal information.
As part of the change, LinkedIn general counsel Blake Lawit confirmed that the company will amend its user agreement and privacy policy to clarify how user data powers its AI features. The update, effective November 20, is intended to provide more transparency about what data is collected and how it feeds LinkedIn’s AI services.
According to the updated policy, users’ personal data may be used to develop AI models, personalize services, and generate insights through AI-powered systems. That data can be gathered from users’ interactions with generative AI features as well as from their preferences, posts, and general activity on the platform.
Data collected by LinkedIn remains in its systems until users actively delete the AI-generated content. The company advises users who want previously collected data removed to submit a request through its data access tool.
One critical risk is that users who feed personal information into generative AI features could see that information surface in the AI’s outputs. LinkedIn says it strives to minimize such occurrences by employing “privacy enhancing technologies” to redact or exclude personal data from training datasets.
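LinkedIn has not said how these technologies work. In practice, redaction of this kind usually means scrubbing identifiers from text before it enters a training corpus. The sketch below is a minimal, hypothetical illustration of that idea using regular expressions; the patterns, placeholder labels, and function name are assumptions for demonstration, not LinkedIn’s implementation.

```python
import re

# Hypothetical PII redaction of the kind a "privacy enhancing
# technology" might apply before text reaches a training corpus.
# Patterns and placeholder labels are illustrative assumptions.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d\b"),
    "URL":   re.compile(r"https?://\S+"),
}

def redact(text: str) -> str:
    """Replace each matched identifier with a typed placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

if __name__ == "__main__":
    sample = "Reach me at jane.doe@example.com or +1 (555) 123-4567."
    print(redact(sample))  # Reach me at [EMAIL] or [PHONE].
```

Pattern matching alone is a blunt instrument: regexes miss names, employers, and other free-form identifiers, which is why production redaction pipelines typically pair them with named-entity recognition models.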
Users can opt out of future AI training by navigating to the “Data Privacy” section of their account settings and disabling the toggle that permits data collection for generative AI improvement. Individuals in the European Economic Area and Switzerland are covered by stronger privacy protections; they have no opt-out toggle because their data was never used for training in the first place.
Users can also object to the use of their personal data in generative AI models that do not draw on their LinkedIn content, such as models used for personalization. This reflects regulatory requirements that platforms justify their data collection practices and give users avenues for objection.
Despite the controversy over its data use, LinkedIn has pledged to follow AI principles designed to mitigate the risks of AI deployment. Users, however, remain responsible for ensuring that anything they share, AI-generated content included, adheres to LinkedIn’s community guidelines.
LinkedIn’s recent practices around AI training data have sparked debate about user privacy and consent in the age of artificial intelligence. By restricting opt-outs to future data use while promising clearer policies, the platform illustrates the ongoing challenge of balancing data utilization with ethical considerations. Users are encouraged to stay vigilant about their data privacy and actively manage their digital footprints.