The EDPB’s opinion on AI and data protection (28/2024) is a landmark guide for organisations developing and deploying AI models in compliance with the GDPR. The opinion addresses critical issues such as the conditions for AI model anonymity, the use of legitimate interest as a legal basis, and the repercussions of processing personal data unlawfully.
This document underpins responsible innovation and reinforces the GDPR’s role in protecting fundamental rights such as privacy, freedom of thought, and the freedom to conduct a business.
Key takeaways from EDPB opinion 28/2024
1. Understanding AI model anonymity
- Challenges in achieving anonymity: AI models trained on personal data are not inherently anonymous. The likelihood of extracting personal data directly or indirectly depends on the design of the model, the training data, and the deployment context.
- Case-by-case assessments: Supervisory authorities must evaluate anonymity based on the probability of personal data extraction. This involves analysing the likelihood of direct extraction or unintended retrieval through queries.
- Documentation requirements: Controllers must maintain robust documentation, including data minimisation techniques, model design measures, and testing protocols, to demonstrate the model’s anonymity.
2. Legitimate interest as a legal basis for AI
- The three-step test: To rely on legitimate interest, controllers must satisfy three cumulative criteria:
  - Legitimacy: The interest must be lawful, clearly articulated, and real rather than speculative. Examples include developing conversational agents or improving threat detection.
  - Necessity: The processing must be essential to achieve the stated purpose, with no less intrusive alternative available.
  - Balancing: Controllers must ensure that their interests do not override the fundamental rights and freedoms of data subjects, especially concerning privacy and transparency.
- Reasonable expectations: Controllers should consider whether data subjects could reasonably expect their personal data to be processed in the specified manner. Factors such as the public availability of the data and the context in which it was collected play a critical role here.
3. Consequences of unlawful data processing
Three scenarios: The opinion examines three scenarios involving personal data unlawfully processed during AI development:
- Scenario 1: When personal data is retained in the AI model and used by the same controller during deployment.
- Scenario 2: When personal data is retained and processed by a different controller during deployment.
- Scenario 3: When personal data is anonymised post-development, allowing subsequent lawful processing.
Key implications: If personal data is unlawfully processed during development, controllers must conduct thorough assessments to ensure that any subsequent processing complies with the GDPR, including by demonstrating effective anonymisation.
4. Mitigating measures
- Privacy-preserving techniques: The opinion highlights best practices such as differential privacy, pseudonymisation, and robust access controls.
- Testing and documentation: Controllers are encouraged to test models regularly, for example for resistance to inference attacks and for regurgitation of training data, and to document these efforts comprehensively.
- Transparency and accountability: Providing clear, accessible information to data subjects about processing activities and associated risks is paramount.
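To make two of the measures above concrete, the sketch below illustrates pseudonymisation with a keyed hash and a differentially private count release. This is a minimal illustration under assumed parameters, not production guidance: the key, identifiers, and epsilon value are hypothetical, and real deployments should rely on vetted privacy libraries and proper key management.

```python
import hmac
import hashlib
import random


def pseudonymise(identifier: str, secret_key: bytes) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).

    The mapping is reproducible only by holders of the key, which should
    be stored separately from the pseudonymised dataset.
    """
    return hmac.new(secret_key, identifier.encode(), hashlib.sha256).hexdigest()


def dp_count(true_count: int, epsilon: float, rng: random.Random) -> float:
    """Release a count with Laplace noise calibrated to sensitivity 1.

    A smaller epsilon gives stronger privacy but a noisier output.
    """
    # Laplace(b) sampled as the difference of two exponentials, with b = 1/epsilon
    b = 1.0 / epsilon
    return true_count + rng.expovariate(1 / b) - rng.expovariate(1 / b)


key = b"example-secret-key"  # hypothetical key, for illustration only
token = pseudonymise("alice@example.com", key)  # stable pseudonym for one subject
noisy = dp_count(1000, epsilon=0.5, rng=random.Random(42))  # noisy aggregate
```

Note that pseudonymised data remains personal data under the GDPR; the point of the sketch is only to show the mechanics, not to claim that either step alone achieves anonymity.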
Who should engage with this opinion?
Data protection officers (DPOs)
DPOs will find the opinion invaluable when shaping compliance strategies for AI projects, particularly in assessing legitimate interests and implementing anonymisation measures.
AI developers and data scientists
The technical insights into data minimisation, privacy-preserving methods, and testing protocols make the opinion a critical reference for AI teams.
Legal and compliance professionals
Lawyers and compliance officers working on AI governance can leverage this guidance to navigate the complex interplay between GDPR requirements and AI innovation.
Regulators and policymakers
Supervisory authorities will benefit from this framework to ensure consistent application of GDPR provisions across jurisdictions.
How ITLawCo can help
At ITLawCo, we specialise in bridging the gap between cutting-edge technology and regulatory compliance. Our expertise in GDPR and AI governance allows us to provide tailored solutions to organisations navigating these complexities.
Our services
- GDPR readiness assessments: Conduct comprehensive audits of your AI systems to ensure compliance with GDPR principles such as data minimisation, accountability, and transparency.
- Policy development: Draft privacy and data governance frameworks tailored to your organisation’s AI use cases, including policies on anonymisation, web scraping, and legitimate interest assessments.
- Mitigation strategies: Develop customised mitigating measures, such as differential privacy implementations, pseudonymisation techniques, and robust access controls to address data protection risks.
- Training and awareness: Offer expert-led training for technical teams, legal professionals, and organisational leadership on AI and GDPR compliance.
- Advisory services: Provide strategic advice on the lawful use of AI, especially in high-risk areas such as generative AI, data scraping, and automated decision-making.
Partner with ITLawCo for responsible AI innovation
The challenges of AI governance require a multidisciplinary approach combining legal acumen, technical expertise, and industry insights. At ITLawCo, we empower organisations to innovate responsibly while meeting the highest data protection standards. Whether you’re developing an AI model or navigating the regulatory landscape, our team can guide you every step of the way.
Contact us today to learn how we can help your organisation thrive in the age of AI. Let’s build a compliant, innovative, and future-ready AI ecosystem together.