EU Releases Draft Code of Conduct for General-Purpose AI Systems

Giancarlo Mori
6 min read · Nov 29, 2024


Original Midjourney creation

On November 14, 2024, the European Union’s AI Office released the first draft of the General-Purpose AI (GPAI) Code of Practice. The Code is meant to help GPAI model providers comply with the EU AI Act, whose obligations for GPAI models take effect on August 2, 2025. Key areas addressed include transparency, copyright compliance, risk assessment, and mitigation strategies. The drafting process is iterative, with multiple rounds planned before finalization in April 2025; stakeholders were invited to submit feedback on this first draft by November 28, 2024, to refine the Code further.

The draft code establishes specific requirements for AI systems deemed to pose systemic risks, defined primarily through technical parameters. A key threshold is computational power exceeding 10²⁵ FLOPs during training — a metric that current AI models like GPT-4 have already surpassed, raising questions about the effectiveness of static technical thresholds.
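To make the threshold concrete, here is a back-of-envelope sketch using the widely cited ~6 × parameters × tokens approximation for the training compute of dense transformer models. The model size and token count below are illustrative assumptions, not official figures for any real system.

```python
# Back-of-envelope training-compute estimate using the common
# ~6 * parameters * tokens approximation for dense transformers.
# The numbers below are illustrative assumptions, not official figures.

THRESHOLD_FLOPS = 1e25  # systemic-risk threshold in the draft code

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training FLOPs for a dense transformer."""
    return 6 * params * tokens

# Hypothetical model: 500B parameters trained on 10T tokens.
flops = training_flops(500e9, 10e12)
print(f"{flops:.2e} FLOPs")      # 3.00e+25
print(flops > THRESHOLD_FLOPS)   # True -> would trigger the notification duty
```

Under this rough approximation, even a mid-size frontier training run clears the 10²⁵ FLOPs bar, which is why the static threshold is already contested.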

Under the proposed rules, providers must notify the EU two weeks before beginning training of these high-compute models. This advance notification requirement represents a shift toward proactive oversight rather than reactive regulation.

Dual Documentation Structure

The framework introduces two interconnected regulatory documents that create a comprehensive paper trail of AI development and deployment:

Safety and Security Framework (SSF)

The SSF establishes the foundational risk management architecture: the structures and processes a provider uses to manage risk throughout AI model development:

  • Systematic risk identification protocols across technical and societal dimensions
  • Technical specifications for model behavior modifications
  • Security protocols for protecting model weights and intellectual property
  • Assessment methodologies for ongoing safety evaluation

Safety and Security Report (SSR)

The SSR turns the SSF’s guidelines into practical, model-specific documentation, recording how risks were actually identified, assessed, and mitigated for each model:

  • Quantitative and qualitative risk analyses at multiple development stages
  • Empirical data on safety measure effectiveness
  • Technical methodology documentation
  • Comprehensive testing results and validation procedures

The relationship between these documents creates a feedback loop: SSF guidelines shape individual model documentation through SSRs, while accumulated SSR data informs updates to the overarching framework. This system aims to capture both compliance records and emerging patterns in AI development.

The technical requirements raise several essential considerations:

  • Evaluation Standards: How will regulators establish standardized methods to assess the safety, fairness, and reliability of complex AI systems?
  • Transparency vs. IP Protection: How can providers meet transparency requirements while safeguarding intellectual property and sensitive data?
  • Documentation Expectations: What constitutes sufficient and actionable documentation for models that evolve rapidly or are updated continuously?
  • Interpretability Requirements: What level of interpretability is expected for black-box AI systems, and how will regulators address inherent limitations?
  • Risk Mitigation: How can providers ensure effective identification and mitigation of risks, particularly for general-purpose AI systems with diverse applications?

External Testing: A Novel Regulatory Approach

The draft introduces a significant expansion of oversight through mandatory external testing requirements. The AI Office and authorized third parties would conduct independent evaluations of GPAI models that meet systemic risk criteria. This represents a departure from the original AI Act, which primarily relied on internal testing protocols.

The external testing requirement raises several operational questions:

  • Technical Expertise: Advanced AI model assessments require specialized knowledge and skills.
  • Access Protocols: Testing proprietary systems poses challenges for security and IP protection.
  • Resource Allocation: Significant infrastructure investment is needed for compliance testing.
  • Bottlenecks: Limited resources and capacity could delay evaluations.
  • Compliance Costs: Testing frameworks and personnel create substantial financial burdens.

Industry Impact

For AI providers, external testing requirements create new operational considerations:

  • Development Delays: Pre-training notifications disrupt schedules and timelines.
  • Resource Allocation: Coordination for external testing demands significant resources.
  • IP Protection: Transparency requirements challenge the safeguarding of proprietary technologies.
  • Operational Costs: Compliance increases costs for testing and resource management.
  • Regulatory Complexity: Adhering to evolving standards adds administrative burdens.

Copyright Protection Framework

The draft code also introduces comprehensive copyright protection measures, marking a significant development in how AI systems interact with copyrighted content. These requirements aim to address one of the most contested areas in AI development: the use of existing content for model training.

Technical Standards and Implementation

The framework mandates support for the robots.txt protocol, an established web standard that allows content owners to specify crawling permissions. Through this implementation, content owners can explicitly define which materials are available for AI training, while providers receive clear signals about permissible training data. Importantly, the code maintains a distinction between search engine indexing and AI training permissions, so that opting a site out of AI training does not reduce its findability in search results.
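In practice, this per-crawler distinction works through robots.txt user-agent groups, which Python’s standard library can parse directly. The crawler name "ExampleAIBot" below is a hypothetical AI-training crawler, not one named in the draft code.

```python
from urllib import robotparser

# Hypothetical robots.txt in which the site owner blocks an AI-training
# crawler ("ExampleAIBot" is an invented name) while leaving
# search-engine crawlers untouched.
robots_txt = """\
User-agent: ExampleAIBot
Disallow: /

User-agent: *
Allow: /
"""

rp = robotparser.RobotFileParser()
rp.parse(robots_txt.splitlines())

# The AI-training crawler is excluded; a search crawler is not.
print(rp.can_fetch("ExampleAIBot", "https://example.com/articles/1"))  # False
print(rp.can_fetch("Googlebot", "https://example.com/articles/1"))     # True
```

Because exclusion is keyed to the crawler’s user-agent string rather than to the content itself, a site can deny AI-training access while remaining fully indexable, which is exactly the separation the draft code preserves.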

Content Exclusion and Piracy Prevention

The code requires providers to develop sophisticated content protection systems that go beyond basic compliance. These systems must actively screen for protected content and recognize copyright holders’ opt-out requests. A key component is the integration with the EU Commission’s Counterfeit and Piracy Watch List, requiring providers to exclude identified piracy websites from their crawling activities.
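A minimal sketch of the exclusion step might look like the following. The blocked domains are invented stand-ins: the EU Commission’s Counterfeit and Piracy Watch List is published as a report, not a machine-readable feed, so a provider would have to maintain its own blocklist derived from it.

```python
from urllib.parse import urlparse

# Hypothetical stand-ins for domains identified on the EU Commission's
# Counterfeit and Piracy Watch List (the real list is a published report).
BLOCKED_DOMAINS = {"pirated-books.example", "stream-rips.example"}

def is_crawlable(url: str) -> bool:
    """Reject URLs whose host, or any parent domain, is on the blocklist."""
    host = urlparse(url).hostname or ""
    parts = host.split(".")
    # Check the host and every parent domain, e.g. cdn.b.example -> b.example.
    return not any(".".join(parts[i:]) in BLOCKED_DOMAINS
                   for i in range(len(parts)))

urls = [
    "https://news.example.org/story",
    "https://cdn.pirated-books.example/file.epub",
]
print([u for u in urls if is_crawlable(u)])  # ['https://news.example.org/story']
```

Checking parent domains as well as exact hosts matters here, since mirrors and CDNs of a listed site typically sit on subdomains.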

Rights Management Infrastructure

Providers must establish comprehensive rights management systems that include detailed documentation of content sources and permission status. This extends to creating clear procedures for handling rights holder inquiries and removal requests. The infrastructure must support timely processing of these requests while maintaining records for compliance verification.
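The record-keeping side of this infrastructure can be sketched as a simple data structure. All field names below are illustrative assumptions, not terms defined in the draft code.

```python
from dataclasses import dataclass, field
from datetime import date

# Minimal sketch of a rights-management record; field names are
# illustrative assumptions, not terms defined in the draft code.

@dataclass
class ContentRecord:
    source_url: str
    license_status: str                 # e.g. "licensed", "opt_out", "unknown"
    removal_requests: list = field(default_factory=list)

    def request_removal(self, rights_holder: str, received: date) -> None:
        """Log a removal request so compliance can verify timely handling."""
        self.removal_requests.append({"from": rights_holder,
                                      "received": received})
        self.license_status = "opt_out"

record = ContentRecord("https://example.com/photo.jpg", "unknown")
record.request_removal("Example Press", date(2024, 11, 20))
print(record.license_status)  # opt_out
```

The point of keeping the request log alongside the permission status is that the draft code asks providers not just to honor removal requests but to retain records demonstrating that they were processed in a timely way.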

Industry Implications

These copyright requirements create substantial operational challenges for AI providers. Companies must develop sophisticated content recognition systems while maintaining extensive documentation of their compliance efforts. This necessitates significant resource allocation for staff training, system development, and ongoing monitoring.

Implementation Timeline and Stakeholder Engagement

The draft code enters a critical phase of development through structured stakeholder consultation. The European Commission has organized four thematic working groups, involving approximately 1,000 stakeholders from various sectors. This collaborative approach aims to refine and specify the code’s requirements based on practical industry insights and technical feasibility.


The implementation process holds particular significance as the EU Commission maintains the authority to declare the code binding across the EU through an implementing act. This potential for mandatory enforcement adds weight to the current stakeholder discussions. If enacted, providers would need to demonstrate compliance either through adherence to the code or through “adequate alternative means” — though the specifics of these alternatives remain undefined.

The timeline for implementation raises several practical considerations. The AI Office, tasked with external testing of systemic-risk models, must develop significant technical capabilities, including the expertise to evaluate complex AI systems while maintaining the confidentiality of proprietary information. Providers, in turn, face the challenge of adapting their development cycles to accommodate new testing requirements and documentation protocols; dedicated AI tooling may eventually help expedite this process.

Implications for the AI Industry

The EU’s draft Code of Conduct represents a significant step toward practical AI governance. By establishing specific technical thresholds, documentation requirements, and testing protocols, it moves beyond broad principles to create actionable compliance frameworks.

Several key factors will influence its effectiveness:

  • The code’s ability to remain relevant amid rapid technological advancement, as evidenced by its computational threshold already being exceeded by current models.
  • The development of sufficient technical expertise within regulatory bodies to meaningfully assess complex AI systems.
  • The establishment of clear protocols for protecting intellectual property during mandatory external testing.
  • The practicality of implementing comprehensive copyright protection measures while maintaining efficient development processes.
  • The preservation of confidentiality and security for models under evaluation, ensuring sensitive data and proprietary architectures are not exposed, plagiarized, or misused.

As the code moves through stakeholder consultation and potential implementation, its impact will likely extend beyond the EU, potentially influencing global standards for AI development and deployment. The balance between innovation and oversight established by this framework may serve as a template for other regions grappling with AI governance challenges. It remains to be seen whether major players such as China will treat a potentially slower pace of development in regulated regions as an opportunity to capitalize.

This development marks a crucial transition from theoretical discussions of AI regulation to practical implementation, setting precedents for how advanced AI systems will be developed, tested, and deployed in an increasingly regulated environment.

For those interested, you can find a list of FAQs on the new Code of Conduct here.

Keep a lookout for the next edition of AI Uncovered!

Follow on Twitter, LinkedIn, and Instagram for more AI-related content.
