Navigating the Intersection of the AI Act and GDPR: A Guide for Organisations

As organisations embrace the integration of Artificial Intelligence (AI) into their operations, it’s essential to consider the implications of existing regulations, particularly the General Data Protection Regulation (GDPR) and the upcoming AI Act. Both frameworks present overlapping areas organisations must address to ensure compliance and mitigate risks.

Revisiting GDPR Compliance

Implementing AI technologies necessitates a reevaluation of GDPR compliance. A useful first step is a gap analysis against ISO/IEC 42001:2023, the international standard for AI management systems. Designing your AI systems around GDPR principles from the start establishes a foundation for compliance with the AI Act.

Scope and Territorial Application

The AI Act and GDPR have broad territorial applications, affecting organisations beyond the EU’s borders. Article 2 of the AI Act mirrors the extraterritorial reach of the GDPR, making it imperative for global organisations to align their practices with these regulations.

Data Training and Compliance

AI systems rely on vast quantities of data for training, often including personal data. Organisations must establish a valid legal basis for processing that data under Article 6 of the GDPR and adhere to the processing principles of Article 5(1), such as lawfulness, fairness, transparency, and data minimisation. This is particularly relevant for AI systems operating under the AI Act.

GDPR Principles for AI Development

Developers must prioritise data minimisation, purpose limitation, and storage limitation from the outset of AI system design. Data protection should not be an afterthought but an integral component of the development process.
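In practice, data minimisation can be enforced mechanically before data ever reaches a training pipeline. The sketch below is illustrative only (the field names and allow-list are hypothetical, not drawn from any regulation): records are stripped to the fields the stated purpose actually requires.

```python
# Illustrative sketch of data minimisation before model training:
# keep only the fields a purpose-scoped allow-list permits.
# Field names here are hypothetical examples.

ALLOWED_FIELDS = {"age_band", "region", "product_category"}

def minimise(record: dict) -> dict:
    """Drop every field not on the purpose-scoped allow-list."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

raw = {
    "name": "Alice",               # direct identifier: excluded
    "email": "alice@example.com",  # direct identifier: excluded
    "age_band": "30-39",
    "region": "EU",
    "product_category": "books",
}
print(minimise(raw))
# {'age_band': '30-39', 'region': 'EU', 'product_category': 'books'}
```

An allow-list (rather than a deny-list) is the safer default: any field not explicitly justified by the processing purpose is dropped automatically.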

Privacy by Design

Article 25 of the GDPR requires data protection by design and by default, meaning AI systems must be developed with privacy compliance in mind. This approach may incur additional costs but is necessary, especially for high-risk AI applications, which require prior data protection assessments and, where residual risk remains high, consultation with data protection authorities.

Accountability and Data Accuracy

Maintaining data accuracy is paramount. Organisations must recognise their accountability for any inaccuracies AI systems produce, such as those stemming from “hallucinations” or incorrect training data. The temporary ban on ChatGPT in Italy in 2023, prompted in part by concerns over inaccurate personal data, underscores this risk.

Challenges in Enforcing Data Rights

Generative AI introduces complexities in upholding GDPR data subject rights such as access (Article 15), rectification (Article 16), erasure (Article 17), and objection (Article 21). Data protection authorities may face significant enforcement challenges, and that pressure could in turn drive greater transparency and accountability within AI systems.

Risk-Based Approach

Both the AI Act and GDPR adopt a risk-based approach to regulation. The AI Act categorises AI systems into risk tiers (unacceptable, high, limited, and minimal risk), and correctly classifying a system is essential for compliance. Organisations must likewise assess their AI systems’ nature, scope, and risks to align with GDPR requirements.
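As a triage aid, the tiered structure can be sketched as a simple lookup. This is a hypothetical helper for internal inventory purposes only: the use-case labels are illustrative examples, and real classification requires legal analysis against the Act’s annexes, not a keyword match.

```python
# Hypothetical triage helper mapping example use cases to the AI Act's
# four risk tiers. Labels are illustrative; real classification requires
# legal review against the Act's annexes.

PROHIBITED = {"social_scoring", "subliminal_manipulation"}
HIGH_RISK = {"recruitment_screening", "credit_scoring", "biometric_identification"}
LIMITED_RISK = {"chatbot", "deepfake_generation"}  # transparency duties apply

def risk_tier(use_case: str) -> str:
    """Return the AI Act risk tier for a catalogued use case."""
    if use_case in PROHIBITED:
        return "unacceptable"
    if use_case in HIGH_RISK:
        return "high"
    if use_case in LIMITED_RISK:
        return "limited"
    return "minimal"

print(risk_tier("credit_scoring"))  # high
```

Even a rough inventory like this helps prioritise which systems need a full conformity assessment and an aligned DPIA first.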

Risk Management

Effective risk management is critical. The AI Act mandates a robust risk management system for high-risk AI technologies, while GDPR requires Data Protection Impact Assessments (DPIAs). By aligning these assessments, organisations can enhance compliance and mitigate risks.

Penalties for Non-Compliance

Both regulations impose significant penalties for non-compliance: under the AI Act, fines can reach €35 million or 7% of total worldwide annual turnover for prohibited practices, while GDPR fines can reach €20 million or 4%. Understanding the potential financial implications of non-compliance is essential for risk management strategies.

Supervision and Enforcement

The enforcement mechanisms for both regulations will involve collaboration between European and national authorities. As the AI Act is implemented, it will be vital to ensure that enforcement structures effectively manage the overlap with GDPR, maintaining a consistent approach.

Moving Toward Harmonious Legislation

The AI Act aims to complement the GDPR, presenting an opportunity for harmonised regulation across the EU’s digital economy. However, achieving this will require increased collaboration among national authorities and transparency from organisations utilising AI technologies.

Focusing on Transparency, Trust, and ISO Standards

To foster responsible AI development, the AI Act emphasises transparency and accountability. Organisations should evaluate their AI systems against ISO standards, such as ISO/IEC 42001:2023, to ensure compliance with the AI Act’s requirements, which include:

  • Transparency obligations to facilitate informed user choices.
  • Reporting obligations for post-market monitoring and incident investigation.
  • Horizontal obligations for all stakeholders involved in AI development and use.

In conclusion, as organisations navigate the complexities of AI integration, understanding the overlap between the AI Act and GDPR is crucial for compliance and risk management. Businesses can establish a robust framework for responsible AI deployment by prioritising data protection and aligning with relevant standards.

Organisations looking to prepare for the future should act now. Embrace the principles of the GDPR and the AI Act to build trust and transparency into your AI systems.
