Tech giants reportedly weakened EU AI guidelines, study finds
The US government has also pushed back against the EU's AI rules
Big Tech corporations have been working behind the scenes to weaken the European Commission's Code of Practice on General Purpose AI - a key guideline meant to help AI model providers comply with the EU's AI Act - according to a report published this week by Corporate Europe Observatory (CEO) and LobbyControl.
The tech giants allegedly exploited structural advantages in the drafting process to dilute the rules governing advanced AI, the groups found through interviews and analysis of lobbying documents. Thirteen experts appointed by the European Commission were tasked with drafting the Code, starting last September, using plenary sessions and workshops to gather feedback from a wide range of participants.
However, the report found that Big Tech companies were granted greater access to the drafting process than other stakeholders. Model providers - the companies developing the large AI models the Code will regulate - were invited to dedicated workshops with the working group chairs. According to the study, attendees included US-based tech companies such as Google, Microsoft, Meta, Amazon, and OpenAI.
In contrast, other stakeholders - including civil society organizations, publishers, and SMEs - had far more limited participation, mostly confined to emoji-based upvoting of questions and comments via the online platform SLIDO.
Controversy has swirled around the drafting process, particularly from rights holders and publishers who fear the rules might contradict copyright law.
A Commission spokesperson confirmed receiving a letter from the US government's Mission to the EU expressing opposition to the Code. The administration of Republican President Donald Trump has repeatedly criticized the EU's digital rules, arguing that they stifle innovation.
"The EU Commission's shortsighted obsession with 'simplification' and 'competitiveness' is creating an open door for aggressive Big Tech lobbying. The Code of Practice is only the beginning of the casualties from this single-minded focus on deregulation," said CEO researcher Bram Vranken.
Commission spokesperson Thomas Regnier stated that "no one received any 'structural favoring' throughout the process" and that all participants had equal opportunities to engage through the same channels - a characterization the report's authors dispute.
The final version of the Code, originally due by 2 May, has been delayed, and the Commission has not said when it will now be published. The AI Office, however, has promised that the final text will appear "ahead of August 2025," when the rules on general-purpose AI models enter into force.
Note: This article has been updated to include a comment from the European Commission
Insights:
- The European Union is phasing in its AI Act, a landmark regulation targeting the ethical and safety aspects of Artificial Intelligence (AI). The Code of Practice on General Purpose AI is intended to help AI model providers comply with the Act.
- Big Tech companies are allegedly exerting influence over the drafting process, attempting to weaken the regulations, which potentially poses a threat to the Act's effectiveness.
- The EU AI Act came into force on August 1, 2024, and is in a transitional phase, with key implementations scheduled for mid to late 2025. High-risk AI systems in sectors such as healthcare, hiring, and finance will face stricter requirements from August 2, 2026.
- The EU is establishing governance bodies, including the European Artificial Intelligence Board and national authorities, to enforce the AI Act and provide regulatory oversight against undue industry influence.


