On November 14, 2024, the EU AI Office published the first draft of the General-Purpose AI Code of Practice (draft Code), kicking off the first of four drafting rounds that will conclude in April 2025.

The Code provides best practices for "providers" of general-purpose AI (GPAI) models based on stakeholder input, international approaches, and the EU AI Act itself. The EU AI Act entered into force on August 1, 2024, and requires the final version of the Code to be ready by May 1, 2025.

The draft Code was prepared by the Chairs and Vice-Chairs of the four EU AI Act Working Groups, who are independent experts with backgrounds in computer science, AI governance, and law. This first draft is intended to serve as a foundation for further refinement as the Working Groups develop the final GPAI Code of Practice, set for publication by May 1, 2025.

Next week, the Chairs and nearly one thousand stakeholders will convene in working group sessions to discuss the draft, and participants will have until November 28, 2024, to submit written feedback. Additional discussion and drafting sessions will be held through April 2025.

The first draft was guided by six key principles set by the Chairs and Vice-Chairs: 

  • Alignment with EU principles and values
  • Alignment with the EU AI Act and international approaches
  • Proportionality to risks
  • Future-proofing
  • Proportionality to the size of the GPAI model provider
  • Support and growth of the AI safety ecosystem

Key components of the draft Code are summarized below. 

Rules for providers of GPAI models 

The draft Code provides guidance and best practices for complying with the EU AI Act's obligations for providers of GPAI models, which focus on transparency requirements and compliance with Union law on copyright. Under the AI Act, a "provider" of a GPAI model is a party that develops a GPAI model, or has a GPAI model developed, and places it on the market or puts it into service under its own name or trademark (Art. 3(3)).

Transparency 

Article 53(1)(a)-(b) of the EU AI Act sets out transparency obligations for providers of GPAI models. The draft Code provides that providers should commit to drafting technical documentation for their GPAI models and keeping it up to date. To advance public transparency, the technical documentation should include general information about the model, intended and restricted or prohibited tasks, acceptable use policies, and testing processes and results, among other technical details.
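For illustration only, a minimal sketch of how those documentation topics might be organized as structured data appears below; the schema and field names are hypothetical and are not drawn from the draft Code's documentation template.

```python
# Illustrative sketch only: the draft Code does not prescribe this schema.
# Field names are hypothetical examples of the documentation topics the
# draft Code mentions (general information, intended and prohibited tasks,
# acceptable use policies, and testing processes/results).
model_documentation = {
    "general_information": {
        "model_name": "example-gpai-model",       # hypothetical
        "provider": "Example Provider GmbH",      # hypothetical
        "release_date": "2025-01-15",
        "architecture": "decoder-only transformer",
    },
    "intended_tasks": ["text summarization", "translation"],
    "restricted_or_prohibited_tasks": ["automated legal advice"],
    "acceptable_use_policy_url": "https://example.com/aup",
    "testing": {
        "processes": ["internal red-teaming", "benchmark evaluation"],
        "results_summary_url": "https://example.com/eval-report",
    },
}
```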

Copyright 

Article 53(1)(c) of the EU AI Act requires providers of GPAI models to put in place a policy to comply with Union law on copyright. According to the draft Code, this includes drawing up and implementing a copyright policy; undertaking reasonable copyright due diligence before entering into a contract with a third party about the use of data sets for the development of GPAI models; and implementing reasonable downstream measures to mitigate the risk that a system into which the GPAI model is integrated will generate copyright-infringing output.

The draft Code also lays out best practices for engaging in text and data mining, including employing only crawlers that read and follow instructions in accordance with robots.txt and not crawling piracy websites.
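As a minimal sketch of what "reading and following" robots.txt can look like in practice, Python's standard library includes a parser for the Robot Exclusion Protocol; the crawler name and URLs below are hypothetical, and this is illustrative rather than a prescribed compliance implementation.

```python
from urllib import robotparser

# Hypothetical crawler identity and target site, for illustration only.
USER_AGENT = "ExampleTrainingCrawler/1.0"
SITE = "https://example.com"

# Fetch and parse the site's robots.txt before crawling any page.
rp = robotparser.RobotFileParser()
rp.set_url(SITE + "/robots.txt")
rp.read()

def may_crawl(url: str) -> bool:
    """Return True only if robots.txt permits this user agent to fetch url."""
    return rp.can_fetch(USER_AGENT, url)

# Crawl only pages the site has not disallowed for this user agent.
for url in [SITE + "/articles/page-1", SITE + "/private/page-2"]:
    if may_crawl(url):
        print("allowed:", url)   # fetch and process the page here
    else:
        print("skipping:", url)  # respect the site's robots.txt directives
```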

Finally, the draft Code calls for a commitment to transparency about the measures taken to comply with Union law on copyright.

Taxonomy of systemic risks 

The AI Act sets forth additional requirements for GPAI models that pose "systemic risks." The draft Code includes guidance on the types of risk that may meet this definition, including offensive cybersecurity risks; chemical, biological, radiological, and nuclear (CBRN) risks; loss of control; automated use of models for AI research and development; persuasion and manipulation; and large-scale discrimination.

The draft Code suggests that there may be other areas of systemic risk (e.g., uses that may pose risks to public health). It further provides guidance on attributes that could lead to systemic risks, including dangerous model capabilities (e.g., persuasion, manipulation, and deception) and dangerous model propensities (e.g., bias and confabulation). It also identifies factors beyond model capabilities and propensities that may influence systemic risk, including specific inputs, configurations, and contextual elements (e.g., the ability to remove guardrails).

Rules for providers of GPAI models with systemic risk

Article 55(1) of the EU AI Act sets out obligations for providers of GPAI models with systemic risk. The draft Code indicates that measures taken to comply should be proportionate to the size and capacity of the provider and to its distribution strategies (e.g., open sourcing). The draft Code also proposes that providers adopt a Safety and Security Framework (SSF) detailing their risk management policies for assessing and mitigating systemic risks. The components of the recommended SSF are summarized below.

  • Risk assessment. As part of the SSF, providers should continuously and thoroughly identify systemic risks that may stem from their GPAI model with systemic risk, including by analyzing pathways to those risks and collecting evidence. Risk assessment and evidence collection should be ongoing throughout the model's full development and deployment lifecycle, including before and during training and during and after deployment.
  • Risk mitigation. As part of the SSF, providers should include a mapping from each systemic risk indicator to the safety and security mitigations necessary to keep systemic risks below intolerable levels (see the illustrative sketch after this list). The SSF should describe the safety and security mitigations that will be implemented to mitigate systemic risk, as well as the limitations of existing mitigations. A Safety and Security Report (SSR) should be created for each GPAI model with systemic risk, detailing risk assessment results and mitigations. The SSR should set out conditions for proceeding, or not proceeding, with further development or deployment, and should form the basis of any development or deployment decisions regarding the model.
  • Governance risk mitigation. The draft Code calls for ownership of systemic risk at all organizational levels, including the executive and board levels. This includes internal assessments of adherence to the SSF, as well as independent expert assessments of GPAI models with systemic risk throughout their lifecycle. Serious incidents should be tracked and reported to the AI Office, with protections in place for whistleblowers. Transparency is once again a focus of the draft Code: the governance measures include a requirement for appropriate public transparency.
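To make the mapping contemplated in the risk-mitigation item concrete, a minimal sketch is below; all indicator names and mitigation measures are invented for illustration and are not taken from the draft Code.

```python
# Illustrative sketch of an SSF-style risk-indicator-to-mitigation mapping.
# Every indicator name and mitigation below is hypothetical.
SYSTEMIC_RISK_MITIGATIONS = {
    "offensive_cyber_capability_above_threshold": [
        "restrict API access to vetted users",
        "filter outputs related to exploit generation",
    ],
    "cbrn_uplift_detected_in_evaluations": [
        "refuse and log related queries",
        "pause further scaling pending expert review",
    ],
    "guardrails_removable_via_fine_tuning": [
        "limit distribution of model weights",
        "require contractual safeguards from downstream deployers",
    ],
}

def required_mitigations(triggered_indicators: list[str]) -> list[str]:
    """Collect the mitigations mapped to each triggered risk indicator."""
    return [
        mitigation
        for indicator in triggered_indicators
        for mitigation in SYSTEMIC_RISK_MITIGATIONS.get(indicator, [])
    ]

# Example: indicators observed during evaluation drive the mitigation plan.
print(required_mitigations(["guardrails_removable_via_fine_tuning"]))
```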

Key takeaways

The draft Code seeks to help providers of GPAI models comply with their obligations under the EU AI Act, understand GPAI models along the AI value chain, comply with EU copyright law, and continuously assess and mitigate possible systemic risks that stem from the development, placing on the market, or use of GPAI models with systemic risk.

Notably, the draft Code is premised on the assumption that there will be only a small number of GPAI models with systemic risk, and it may need to change significantly if that assumption no longer holds.

The draft Code leaves many open questions, which will be addressed and refined over the next several months. The final GPAI Code of Practice, expected by May 1, 2025, will set out best practices for providers of GPAI models in complying with the EU AI Act.
