The Italian data protection authority has fined OpenAI €15 million for breaching GDPR rules in its processing of personal data in ChatGPT.
The fine was imposed mainly for using personal data to train artificial intelligence models without an adequate legal basis, violating the principle of transparency, and providing users with insufficient information. The regulator also found that OpenAI had no age-verification mechanism, which posed a risk to children under the age of 13. The case followed a data breach in March 2023 that the company did not report to the authority. In addition to the fine, OpenAI is required to run a six-month media campaign explaining how ChatGPT works, including the types of data it collects and users' rights regarding the processing of their personal data.
OpenAI called the decision disproportionate and said it would appeal, noting that the fine is roughly 20 times the revenue the company earned in Italy during the relevant period. Meanwhile, new guidelines for handling personal data in artificial intelligence are under discussion in Europe.
In March 2023, Italy became the first Western country to temporarily block ChatGPT over data protection concerns. Access to the service was restored about a month later, after OpenAI addressed some of the violations. The European Data Protection Board (EDPB) has issued guidance on data processing in AI, stating that the GDPR does not apply to data that has been genuinely anonymized after model training.
This case shows that EU countries are tightening data protection requirements for large technology companies such as OpenAI. It also underscores the importance of transparency when personal data is used in artificial intelligence systems.