The European Union has taken a significant step toward regulating artificial intelligence with the publication of a draft Code of Practice. This initial version targets providers of general-purpose AI models, offering guidance to comply with the EU AI Act’s obligations.
Enacted this summer, the EU AI Act establishes rules for the development and deployment of AI systems under a risk-based framework. The draft Code addresses transparency, copyright, and systemic risk management for the most capable models.
The draft Code is specifically designed for providers of general-purpose AI models. Under the Act, a general-purpose model trained using more than 10^25 FLOPs of compute is presumed to pose “systemic risk,” which triggers the Act’s most stringent requirements. Transparency is a key focus: the Code requires AI makers to detail their risk management policies and to continuously identify potential systemic risks.
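To make the 10^25 FLOPs threshold concrete, here is a minimal back-of-the-envelope sketch. It uses the widely cited “compute ≈ 6 × parameters × training tokens” rule of thumb for dense transformer training; the model sizes below are illustrative assumptions, not figures from the Act or the draft Code.

```python
# Rough estimate of training compute vs. the EU AI Act's 10^25 FLOP
# systemic-risk threshold, using the common 6 * N * D approximation
# (N = parameter count, D = training tokens). Illustrative only.

SYSTEMIC_RISK_THRESHOLD = 1e25  # FLOPs, per the EU AI Act


def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute for a dense transformer."""
    return 6 * n_params * n_tokens


def presumed_systemic_risk(n_params: float, n_tokens: float) -> bool:
    """True if the estimate meets or exceeds the Act's threshold."""
    return estimated_training_flops(n_params, n_tokens) >= SYSTEMIC_RISK_THRESHOLD


# A hypothetical 70B-parameter model trained on 15T tokens
# comes in around 6.3e24 FLOPs -- just under the threshold:
print(estimated_training_flops(70e9, 15e12))   # ~6.3e24
print(presumed_systemic_risk(70e9, 15e12))     # False
```

Under this heuristic, today’s largest frontier training runs sit near or above the threshold, which is why providers are asked for “best effort estimates” of when their models might cross it.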
In addition to identifying explicitly listed systemic risks, the Code encourages AI providers to recognize other potential threats, including large-scale privacy infringements and surveillance concerns. The framework emphasizes the importance of thorough and continuous risk assessment, requiring general-purpose AI makers to include “best effort estimates” for when they might develop models that trigger systemic risk indicators.
The “Safety and Security Framework” (SSF) within the draft Code outlines specific requirements for risk assessment and mitigation, tying each mitigation measure to the risks it addresses. The Code also covers how providers of general-purpose AI models handle copyrighted material, including in their training data.
Feedback on the draft Code is open until November 28, providing stakeholders an opportunity to contribute to its refinement. The goal is to finalize a more detailed version by May 1, 2025. This timeline aligns with the EU AI Act’s transparency requirements for general-purpose AI models, set to take effect on August 1, 2025.
“This is where this Code of Practice will come in,” noted Natasha, a senior reporter for TechCrunch, on the Code’s role in guiding compliance with the EU AI Act. Non-compliance can trigger enforcement action, making adherence crucial for AI providers.
General-purpose AI models will face even stricter rules 36 months after the EU AI Act’s entry into force, on August 1, 2027, further tightening oversight of systemic risks.