
AI model providers now required to disclose information about their algorithms under new EU regulations


The European Union (EU) has introduced new rules for providers of Artificial Intelligence (AI) models, effective from August 2025, as part of the EU AI Act adopted in May 2024. The rules apply to providers of general-purpose models such as ChatGPT and Gemini, among others.

Transparency:

Under the new rules, AI providers must maintain detailed technical documentation, known as the Model Documentation Form (MDF), and submit it to regulators. The documentation must describe the provenance and composition of the training, validation, and testing datasets. Although it is confidential and protected by trade-secret rules, regulatory authorities may access it for oversight, compliance checks, and enforcement actions.

Providers are also required to publicly disclose a summary of the training data used, covering data sources and data processing aspects. This public summary aims to build trust while preserving proprietary confidentiality.

General-Purpose AI (GPAI) models are classified by risk level. Models posing systemic risk, such as those trained with very large amounts of compute, face stricter transparency and safety measures, including mandatory risk assessments and adversarial testing.
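The compute-based classification above can be sketched in code. Note the threshold used here is the presumption in Article 51 of the Act (cumulative training compute above 10^25 floating-point operations triggers the systemic-risk presumption); the function name and structure are purely illustrative, not part of any official tooling.

```python
# Illustrative sketch of the AI Act's compute-based presumption (Art. 51):
# a GPAI model is presumed to pose "systemic risk" when its cumulative
# training compute exceeds 10^25 floating-point operations.
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25

def classify_gpai_model(training_flops: float) -> str:
    """Classify a general-purpose AI model by the Act's compute presumption."""
    if training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD:
        # Stricter duties apply: risk assessment, adversarial testing, etc.
        return "GPAI with systemic risk"
    # Baseline transparency and documentation duties apply.
    return "GPAI"

print(classify_gpai_model(5e25))  # a frontier-scale training run
print(classify_gpai_model(1e23))  # a smaller model
```

In practice the Commission can also designate a model as systemic-risk on other grounds, so compute is a presumption rather than the sole criterion.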

Data Usage:

The EU AI Act supports the exercise and enforcement of data protection and copyright rights by requiring transparency about the data used in AI training. Through publicly accessible information about data sources and processing, providers must enable legitimate parties to exercise their rights under EU law over data used by AI systems.

Providers must also assess systemic risks related to data use, ensure cybersecurity, report serious incidents, and disclose relevant operational details such as energy consumption; these obligations aim to prevent harms like bias and misinformation.

Copyright Protection:

The EU AI Act imposes obligations related to copyright and related rights to ensure lawful use of protected content within training data. The Commission issued a voluntary GPAI Code of Practice offering guidance on compliance related to transparency and copyright matters, helping providers navigate these obligations practically.

The Act’s transparency provisions and documentation also support copyright enforcement by enabling authorities to audit and verify the legality of data usage in AI training.

Enforcement:

Private individuals can sue providers under the AI Act, and the European AI Office will enforce the new rules from August 2026 for models introduced after that date, and from August 2027 for models placed on the market before August 2, 2025. The European Commission can impose fines for violations, with penalties of up to 15 million euros or three percent of total worldwide annual turnover.
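The penalty ceiling described above can be illustrated with a short sketch. The "whichever is higher" reading of the two caps follows the Act's usual fine formulation; the function below is a hypothetical illustration, not an official calculation method.

```python
# Sketch of the GPAI penalty ceiling: up to EUR 15 million or 3% of total
# worldwide annual turnover, whichever is higher (assumption: the Act's
# standard "whichever is higher" formulation applies).
def max_fine_eur(annual_turnover_eur: float) -> float:
    return max(15_000_000.0, 0.03 * annual_turnover_eur)

# For a provider with EUR 100M turnover, the flat EUR 15M ceiling applies;
# for a EUR 10B provider, the 3% cap (EUR 300M) is the binding ceiling.
print(f"{max_fine_eur(100_000_000):,.0f}")
print(f"{max_fine_eur(10_000_000_000):,.0f}")
```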

The new rules aim to balance innovation with fundamental rights, safety, and public trust, and the EU hopes they will strengthen copyright protection. Critics, however, question their effectiveness, particularly the absence of any obligation to name specific datasets, domains, or sources.


  1. The EU AI Act, effective from August 2025, sets out transparency, data usage, copyright, and enforcement obligations for providers of AI models such as ChatGPT and Gemini.
  2. Under the new regulations, providers must respect data protection and copyright rights by being transparent about the data used in AI training and by taking measures to prevent harms like bias or misinformation.
  3. The Act establishes legislative guidelines for the use of artificial intelligence, spanning areas such as technology and politics, and provides for their enforcement by designated authorities.
