Guidelines for Responsible Use of Artificial Intelligence by the GPAI Initiative
The European Union has established a comprehensive regulatory framework for Artificial Intelligence (AI) systems, with a specific focus on General-Purpose AI (GPAI) models. The AI Act, whose first provisions have applied since February 2025, introduces a risk-based regulatory regime for AI systems, including GPAI models [1][2].
### The Regulatory Landscape for GPAI Models under the EU AI Act
Under the AI Act (Regulation (EU) 2024/1689), GPAI models are subject to several obligations aimed at ensuring safety, transparency, and legal compliance. These include preparing detailed technical documentation, supplying essential information to downstream users, respecting the EU Copyright Directive, providing transparency about training data, and assessing and mitigating systemic risks [1][2]. Models exceeding specific computational thresholds are presumed to pose systemic risks, triggering additional requirements [2].
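The computational threshold mentioned above can be made concrete: under Article 51(2) of the AI Act, a GPAI model is presumed to pose systemic risk when the cumulative compute used for its training exceeds 10^25 floating-point operations. The sketch below illustrates that presumption check only; the compute figures in the examples are made up, and the classification can in practice also be triggered or rebutted on other grounds.

```python
# Illustrative only: the AI Act's compute-based presumption of systemic
# risk (Art. 51(2), Regulation (EU) 2024/1689). The example FLOP figures
# are hypothetical; actual classification involves further criteria.
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25  # cumulative training compute, in FLOPs

def presumed_systemic_risk(training_flops: float) -> bool:
    """Return True if the model is presumed to pose systemic risk
    based solely on cumulative training compute."""
    return training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD

print(presumed_systemic_risk(3e24))  # False: below the presumption threshold
print(presumed_systemic_risk(2e25))  # True: above the presumption threshold
```

Note that this is a presumption, not a final determination: the Commission may also designate models below the threshold, and providers above it may argue against the classification.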
The AI Act also imposes obligations on various actors, including providers, deployers, importers, and manufacturers, and mandates ensuring a sufficient level of AI literacy among personnel involved with AI systems [1].
### The General-Purpose AI Code of Practice: A Compliance Aid
Published on July 10, 2025, the General-Purpose AI Code of Practice is a voluntary framework developed through a multi-stakeholder process to help industry actors comply with the AI Act’s legal obligations concerning GPAI models related to safety, transparency, and copyright [3][4].
Key aspects of the Code of Practice include facilitating compliance, reducing administrative burden, increasing legal certainty, and undergoing evaluation by EU Member States and the European Commission [3]. Providers of GPAI models who opt to sign the Code must complete a formal signature process with the European Commission, signaling their commitment to meeting the AI Act requirements through this standardized approach [3].
Signatories of the Code are expected to benefit from clearer legal status regarding their adherence to AI regulations and a reduction in the administrative load compared to proving compliance through other methods [3]. The Code is also set to be complemented by official guidelines on GPAI-related concepts shortly after publication [3].
In addition, providers must exclude content from websites on a dynamic list drawn up by EU authorities and respect rights reservations, expressed through machine-readable means or other appropriate methods [1]. The Transparency chapter defines the specific information that providers must document and introduces a Model Documentation Form to help compile the relevant information [1].
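The two exclusion duties above can be sketched as a simple filter applied during training-data collection. This is a minimal illustration, not the Code's prescribed mechanism: the domain names are hypothetical, the blocklist here is a hard-coded stand-in for the dynamic list maintained by EU authorities, and robots.txt (parsed via Python's standard `urllib.robotparser`) is used as one common machine-readable reservation format, though the Code does not mandate a specific one.

```python
from urllib import robotparser  # stdlib parser for robots.txt-style files
from urllib.parse import urlparse

# Hypothetical stand-in for the dynamic list of excluded websites;
# in practice this would be fetched and refreshed from an official source.
EXCLUDED_DOMAINS = {"piracy-example.org"}

def may_collect(url: str, robots_txt: str, user_agent: str = "TrainingDataBot") -> bool:
    """Return True only if the URL is neither on the exclusion list nor
    disallowed by a machine-readable rights reservation (robots.txt)."""
    domain = urlparse(url).netloc
    if domain in EXCLUDED_DOMAINS:
        return False  # site appears on the (hypothetical) exclusion list
    parser = robotparser.RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return parser.can_fetch(user_agent, url)

robots = "User-agent: *\nDisallow: /private/"
print(may_collect("https://piracy-example.org/page", robots))     # False: excluded domain
print(may_collect("https://news-example.com/private/a", robots))  # False: rights reserved
print(may_collect("https://news-example.com/articles/a", robots)) # True
```

A real pipeline would also need to log these decisions, since the Transparency chapter expects providers to document how training data was collected and filtered.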
The General-Purpose AI Code of Practice provides a practical compliance tool that reduces complexity and legal uncertainty for providers willing to adopt it [1][2][3][4]. If a systemic risk is found unacceptable, providers must implement mitigation measures until the acceptance criteria are met [1]. Providers must also monitor GPAI models before and after release and allow evaluations by external evaluators [1].
To mitigate the risk of copyright infringement, providers must implement appropriate technical safeguards and prohibit copyright-infringing uses [1]. The Copyright Chapter outlines the requirements for a provider's copyright policy, including the commitment to reproduce and extract only lawfully accessible copyright-protected content and not to circumvent effective technological protection measures [1].
The GPAI provider must designate a point of contact for affected rightsholders and implement a complaint mechanism for non-compliance [1]. The Safety and Security chapter focuses on the specific obligations of providers of GPAI models posing systemic risk, providing a list of default systemic risks, including chemical, biological, radiological and nuclear risks; loss of control; cyber offenses; and harmful manipulation risks [1].
The Transparency chapter of the Code of Practice outlines how providers must make information available to the AI Office and national competent authorities upon request [1]. GPAI models posing a systemic risk are subject to additional safety and security obligations under the AI Act [1]. The Commission will publish guidelines to clarify the rules for GPAI providers, including the classification of models as GPAI and the classification of GPAI with systemic risks [1].
In conclusion, the European Union's AI Act establishes a risk-based regulatory framework for AI systems, including a specific regime for GPAI models. The General-Purpose AI Code of Practice serves as a voluntary framework to help providers comply with the AI Act’s legal obligations, reducing complexity and legal uncertainty for providers willing to adopt it. However, the Commission still needs to provide the template for the public disclosure of information about training data in accordance with Art. 53(1)(d) [1]. The Commission has also informally recognized a one-year grace period for GPAI providers acting in good faith and willing to collaborate for full compliance [1]. The Commission appears to be heading in the right direction towards effective AI regulation, but there is still a long way to go [1].