27 Jan
Processing of personal data and generative AI: The OpenAI decision
Authors: Francesco Torlontano, Laura Senatore, Lorenzo Covello
The decision issued against OpenAI, the developer and supplier of the ChatGPT platform, represents the outcome of a complex investigation by the Italian Data Protection Authority that commenced in March 2023. The investigation stemmed from a series of alleged violations that had made it necessary to adopt an urgent measure (see GPDP Decision No. 114 of April 11, 2023) temporarily restricting the processing of the personal data of Italian ChatGPT users. That decision also imposed a number of corrective measures on OpenAI aimed at ensuring that processing activities involving data subjects using ChatGPT complied with European and Italian data protection legislation.
The breaches committed by OpenAI
The violations ascertained by the Italian Data Protection Authority at the end of the investigation relate to various aspects of the processing of personal data carried out through ChatGPT. Specifically, according to the decision, the main breaches of data protection law committed by OpenAI include breaches of:
- Article 33 of the GDPR for failure to notify the Supervisory Authority about the data breach that occurred on 20 March 2023, which affected Italian users of the ChatGPT platform;
- Articles 5(2) and 6 of the GDPR for not having identified, in compliance with the principle of accountability, a legal basis for the training of ChatGPT’s AI algorithms prior to the start of processing;
- Articles 5(1)(a), 12 and 13 of the GDPR for failing to provide users with an appropriate privacy notice containing clear, transparent and up-to-date information about the processing of personal data;
- the principle of privacy by design (Art. 25(1) of the GDPR), as well as Articles 8 and 24 of the GDPR, for failing to provide adequate mechanisms to verify the age of users of the ChatGPT service; and
- the principle of accuracy under Article 5(1)(d) of the GDPR, as the outputs generated by ChatGPT could sometimes be inaccurate.
Corrective measures imposed by the Italian Data Protection Authority
Of particular interest are the corrective measures imposed on OpenAI by the Italian Data Protection Authority in its urgent decision issued in April 2023, within the framework of the preliminary investigation concluded with decision No. 755 of 2024.
Notably, this is the first time the Italian Supervisory Authority has applied the powers provided for in Article 166, paragraph 7 of the Italian Personal Data Protection Code, in particular the administrative sanction of an injunction to carry out an institutional communication campaign aimed at promoting awareness of the right to the protection of personal data. Specifically, the Italian Data Protection Authority ordered OpenAI to draft a six-month communication campaign on radio, television, in newspapers and on the Internet, to be launched after approval by the Authority, in order to ensure effective transparency in the processing of personal data.
The corrective measures implemented by OpenAI in fulfilment of the Data Protection Authority's orders also include updating and supplementing the privacy policy with detailed information on the purposes of algorithm training and on the methods and logic behind the processing of data necessary for the functioning of ChatGPT.
In addition, in compliance with the principle of accountability, OpenAI has identified legitimate interest as the legal basis for the processing of personal data for the purposes of algorithm training (the Italian Data Protection Authority had indicated the consent of data subjects or legitimate interest as valid legal bases for the lawful processing of personal data).
Moreover, in order to protect the personal data of underage persons, OpenAI has implemented appropriate mechanisms for verifying the age of users, such as the so-called "age gate", which requires new users, and users in Italy who have already registered on the platform, to provide their date of birth so as to prevent access by individuals under 13 years of age (the minimum age for use of the ChatGPT service).
Also in relation to the processing of the personal data of underage users, OpenAI has decided to entrust age verification to a certified third party, the company Yoti Ltd. Specifically, the solutions adopted include age verification based either on an estimate derived from a selfie provided by the user via the Yoti App or its website, or on a scan of the user's ID and the calculation of age from the date of birth.
Conclusion
The Italian Data Protection Authority decision represents an important case in the regulation of AI-based technologies within the European Union. The measures imposed on OpenAI are not only a reminder of strict compliance with data protection provisions, but also a clear signal to technology companies that innovation cannot disregard the protection of the fundamental rights and freedoms of data subjects.