CNIL focuses on privacy-friendly AI

France’s privacy watchdog, the CNIL, has unveiled its action plan for artificial intelligence (AI), outlining its priorities for the coming months. The regulator aims to support the development of AI systems that respect personal data and protect individuals. It plans to establish clear rules for personal data protection and to contribute to privacy-friendly AI. The CNIL will also examine how AI systems affect individuals and support innovative players in the French AI ecosystem.

Data scraping, particularly by AI model makers, is a key area of concern for the CNIL. Scraping involves collecting data from across the internet to build training sets for AI models, such as large language models. The practice raises legal challenges in Europe, however, because under the General Data Protection Regulation (GDPR) any processing of personal data requires a valid legal basis, and the bases available to technologies like ChatGPT are limited. The CNIL will make protecting publicly available web data against scraping a priority.

The CNIL’s action plan also highlights the importance of protecting user data throughout the AI process, ensuring fairness and transparency in data processing, preventing bias and discrimination, and addressing security challenges. The regulator plans to issue guidelines on data sharing and reuse, the application of purpose limitations to AI systems, and the management of individual rights.

OpenAI’s ChatGPT has faced scrutiny from European data protection authorities, including investigations and enforcement actions. As the EU works on a risk-based framework for regulating AI, existing data protection authorities will play a role in enforcing the regulations. OpenAI and similar companies may face enforcement actions and penalties for non-compliance with privacy laws.

The CNIL is actively examining various aspects of AI systems, including the use of scientific research to build training databases, the sharing of responsibilities among entities involved in AI development, data selection for training, the management of individual rights, retention periods for training data and models, ethical considerations, and the audit and control of AI systems. The regulator has already received complaints against OpenAI and is working with the European Data Protection Board to harmonize approaches to regulating ChatGPT.

To support compliance with European rules and values, the CNIL has established a regulatory sandbox and encourages AI companies and researchers to participate. The sandbox provides a platform for developing AI systems that align with personal data protection rules.

Overall, the CNIL’s action plan reflects its commitment to protecting personal data and ensuring privacy-friendly AI systems. It aims to address the challenges posed by AI technologies, promote transparency and fairness, and contribute to the development of robust regulations in Europe.
