Barry Scannell: The EU AI Act comes into force – a comprehensive guide
Irish lawyer Barry Scannell provides a comprehensive guide to the EU’s new AI Act.
Today, 1 August 2024, marks a watershed moment in the regulation of artificial intelligence as the EU Artificial Intelligence Act officially enters into force.
This landmark legislation, formally known as Regulation (EU) 2024/1689, establishes a comprehensive framework for the development, deployment, and use of AI systems across the European Union.
As legal practitioners, it is crucial to understand the intricacies of this Act and its far-reaching implications for businesses, innovators, and society at large.
Background and objectives
The AI Act is the culmination of years of deliberation and negotiation within the EU, reflecting the growing recognition of AI’s transformative potential and associated risks. The primary objectives of the Act are to ensure the safety and fundamental rights of EU citizens, foster innovation, and establish the EU as a global leader in ethical AI development.
Key provisions of the AI Act
Risk-based approach
The AI Act adopts a tiered, risk-based approach to regulation, categorising AI systems based on their potential impact:
Unacceptable risk (prohibited practices)
The Act prohibits AI practices deemed to pose unacceptable risks to society. These include:
- Social scoring systems, whether operated by public or private actors
- Exploitation of vulnerabilities of specific groups
- Real-time remote biometric identification in publicly accessible spaces for law enforcement (with limited exceptions)
- Emotion recognition systems in workplaces and educational institutions (unless for medical or safety purposes)
- Subliminal or purposefully manipulative AI techniques that materially distort a person's behaviour
- Predictive policing based solely on profiling
High-risk AI systems
AI systems used in critical areas such as healthcare, education, law enforcement, and critical infrastructure are classified as high-risk. These systems are subject to stringent requirements including:
- Risk management systems
- Data governance measures
- Technical documentation
- Record-keeping
- Transparency and provision of information to deployers
- Human oversight
- Accuracy, robustness, and cybersecurity
Limited-risk AI systems (transparency obligations)
Certain AI systems that interact directly with people, such as chatbots, are subject to specific transparency obligations, such as notifying users that they are interacting with an AI system.
General-purpose AI models
The Act introduces specific obligations for providers of general-purpose AI models, such as the large language models (for example, GPT-4) that underpin technologies like ChatGPT, with heightened requirements for models deemed to pose systemic risk. These obligations include:
- Conducting model evaluations
- Assessing and mitigating systemic risks
- Reporting serious incidents
- Implementing robust cybersecurity measures
Transparency and explainability
Providers must ensure AI-generated content is clearly labelled as such, promoting transparency and trust. Additionally, high-risk AI systems must be designed to allow for human oversight and interpretation of their outputs.
Regulatory sandboxes
The Act establishes frameworks for testing innovative AI systems in real-world conditions under regulatory supervision. This provision aims to foster innovation while ensuring compliance with the Act’s requirements.
Extraterritorial reach
The legislation applies to AI systems placed on the market or used within the EU, regardless of where the provider is based. This extraterritorial reach potentially impacts global AI development practices and requires non-EU companies to comply if they wish to operate in the EU market.
AI literacy requirements
The Act requires providers and deployers of AI systems to ensure that their staff and other individuals involved in operating AI systems on their behalf possess adequate AI literacy. This includes understanding the technical aspects of AI systems, applying them properly during development and deployment, and interpreting AI outputs correctly.
Governance and enforcement
The Act establishes a European Artificial Intelligence Board to facilitate its implementation and drive the development of AI standards. National competent authorities will be responsible for the application and implementation of the Act, with the potential for significant administrative fines for non-compliance.
Penalties for non-compliance
The AI Act introduces substantial penalties for violations:
- Up to €35 million or seven per cent of global annual turnover (whichever is higher) for the most serious infringements, such as the use of prohibited AI practices.
- Up to €15 million or three per cent of global annual turnover for non-compliance with specific obligations related to high-risk AI systems.
- Up to €7.5 million or 1.5 per cent of global annual turnover for supplying incorrect information to authorities.
These fines are designed to ensure compliance and underscore the EU’s commitment to responsible AI development and use.
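To illustrate how the caps operate, the following is a minimal, illustrative sketch in Python of the "whichever is higher" mechanic, which the Act applies to each tier (lower caps apply to SMEs and start-ups). The tier labels and function names are hypothetical shorthand for the thresholds listed above; this is not a compliance tool.

```python
# Illustrative sketch of the AI Act's maximum-fine mechanic: the cap is
# the greater of a fixed amount and a share of global annual turnover.
# Figures mirror the tiers set out above; tier names are hypothetical.

FINE_TIERS = {
    # tier: (fixed cap in EUR, share of global annual turnover)
    "prohibited_practices": (35_000_000, 0.07),
    "high_risk_obligations": (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.015),
}

def max_fine(tier: str, global_turnover_eur: float) -> float:
    """Return the maximum administrative fine for a tier: the higher of
    the fixed cap and the turnover-based cap."""
    fixed_cap, turnover_share = FINE_TIERS[tier]
    return max(fixed_cap, turnover_share * global_turnover_eur)

# Example: a company with EUR 1bn global turnover engaging in a
# prohibited practice faces a cap of max(EUR 35m, EUR 70m) = EUR 70m.
print(f"EUR {max_fine('prohibited_practices', 1_000_000_000):,.0f}")
```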
Implementation timeline
While the AI Act is now in force, its implementation will be phased:
- 2 February 2025: Rules on prohibited AI practices and AI literacy obligations take effect.
- 2 August 2025: Provisions for general-purpose AI models become applicable.
- 2 August 2026: Rules for Annex III high-risk AI systems and regulatory sandboxes take effect.
- 2 August 2027: The remaining provisions, including rules for high-risk AI systems embedded in products regulated under Annex I, become fully applicable.
Implications for legal practice
The AI Act has significant implications for legal practitioners across various fields:
- Technology and IP law: Lawyers will need to advise clients on compliance strategies, intellectual property considerations, and potential liabilities associated with AI development and deployment.
- Corporate law: Legal professionals will play a crucial role in helping companies integrate AI governance into their corporate structures and policies.
- Data protection: The interplay between the AI Act and existing data protection regulations like GDPR will require careful navigation.
- Employment law: As AI systems increasingly impact workplace decisions, lawyers will need to address potential discrimination and fairness issues.
- Contract law: Drafting and negotiating AI-related contracts will require new considerations, including liability allocation and performance metrics.
- Regulatory compliance: Lawyers will be instrumental in helping clients navigate the new regulatory landscape, including interactions with supervisory authorities and participation in regulatory sandboxes.
Conclusion
The EU AI Act represents a pivotal shift in the regulatory landscape, balancing the promotion of innovation with the protection of fundamental rights and safety.
As legal practitioners, it is crucial to stay informed about the evolving interpretations and applications of the AI Act. This legislation will undoubtedly shape the future of AI development and deployment in the EU and beyond, influencing global standards and practices.
The coming years will likely see a surge in demand for legal expertise in AI-related matters. Law firms and in-house counsel should prioritise building competencies in this area, potentially through specialised training or dedicated AI law teams.
As we enter this new era of AI regulation, the legal profession has a vital role to play in ensuring the responsible and ethical development of AI technologies while safeguarding the interests of businesses and individuals alike.
Barry Scannell is a technology partner at William Fry