EU AI Act - General Purpose AI Rules

Artificial Intelligence Act: fostering responsible AI development in Europe

Overview

The Artificial Intelligence Act (“AI Act”) is set to be published in the EU’s Official Journal soon, following its final approval by the Council of the EU on 21 May 2024. This landmark legislation establishes a regulatory framework for Artificial Intelligence (“AI”) across the European Union, promoting the trustworthy and ethical development, deployment, and use of AI technologies. The new rules will enter into force twenty days after publication, with obligations then phased in gradually over three years, more specifically:

  • Bans on prohibited practices, which will apply six months after the entry into force date;
  • Codes of practice, which will apply nine months after entry into force;
  • General-purpose AI rules, including governance, which will apply 12 months after entry into force;
  • Obligations for high-risk systems, which will apply 36 months after entry into force.

The Significance of Codes of Practice

One crucial aspect of the AI Act involves the creation of Codes of Practice for General-Purpose AI (“GPAI”) models. These codes are fundamental for bridging the gap between the high-level requirements outlined in the AI Act for GPAI providers and the practical implementation of those requirements. In essence, they serve as a detailed roadmap for ensuring compliance with the principles enshrined in the new Regulation.

Concerns Regarding Stakeholder Involvement

On 8 July 2024, certain Members of the European Parliament (“MEPs”) expressed their concerns in a letter sent to the EU’s AI Office, urging it to include civil society in the drafting of rules for powerful AI models. In particular, they argued against the European Commission’s initial approach, which reportedly proposed allowing AI model providers to take the lead in drafting the codes, with civil society organizations (“CSOs”) playing a more limited consultative role.

MEPs expressed apprehension that such an approach could result in codes that prioritize industry interests over broader societal concerns. They advocated for an inclusive process that actively engages a diverse range of stakeholders, including:

  • Companies, as input from the AI development and deployment sectors is crucial for ensuring the codes are practical and workable.
  • Civil Society Organizations, which bring valuable perspectives on ethical considerations, potential biases, and the impact of AI on fundamental rights.
  • Academia, with researchers and experts offering insights into the latest advancements in AI technology and potential risks.
  • Other Stakeholders, considering that a diverse range of voices can contribute to well-rounded and comprehensive codes.

At the same time, civil society members highlighted the risk of a situation in which large technology companies write their own rules, potentially undermining the AI Act’s goal of establishing equal and globally influential standards for GPAI development.

Looking Forward

The European Commission has acknowledged the need for clarity on stakeholders’ involvement. Details regarding the participation of CSOs and other stakeholders are expected to be included in a forthcoming call for expressions of interest. An external firm will be responsible for leading the drafting process, with the AI Office maintaining oversight and approving the final versions of the codes.

The coming months will be crucial in determining how the EU navigates stakeholders’ involvement in crafting the AI Act’s Codes of Practice. A transparent and inclusive process will be essential for establishing strong, effective, and ethically sound standards for trustworthy AI development across Europe.
