
Analysis: Tech Giants Strive to Weaken Europe’s AI Act, Reuters Reports

By Martin Coulter

LONDON (Reuters) – Major technology firms are making a concerted effort to convince the European Union to adopt a more lenient approach to artificial intelligence regulation as they try to avoid potentially massive fines.

In May, EU lawmakers reached an agreement on the AI Act—the first comprehensive set of regulations for artificial intelligence globally—after extensive negotiations among various political factions.

However, the specifics of how the rules will be applied to “general purpose” AI (GPAI) systems, such as ChatGPT, remain uncertain until the associated codes of practice are finalized. This ambiguity leaves companies facing the possibility of copyright litigation and significant financial penalties.

The EU has called on businesses, academics, and other stakeholders to contribute to drafting the code of practice, receiving nearly 1,000 applications—an unusually high number, according to a source familiar with the situation who requested anonymity.

While the forthcoming AI code of practice will not be legally binding when it takes effect next year, it will serve as a checklist companies can use to demonstrate compliance. A company that claims to follow the law while ignoring the code could face legal challenges.

"The code of practice is essential. If executed well, it will allow us to continue innovating," said Boniface de Champris, a senior policy manager at the trade organization CCIA Europe, which represents companies like Amazon, Google, and Meta. "If it becomes overly restrictive, it will pose challenges."

DATA SCRAPING

Firms such as Stability AI and OpenAI have faced scrutiny regarding the use of copyrighted materials, like popular books or image collections, to train their AI models without permission from their creators.

According to the AI Act, companies must provide "detailed summaries" of the data utilized for training their AI models. In practice, content creators may pursue compensation if they discover their work was used without authorization, although this is yet to be legally tested.

Some business leaders argue that the required summaries should disclose minimal detail to protect trade secrets, while others contend that copyright holders have a right to know whether their content was used without consent. OpenAI, which has drawn criticism for refusing to disclose the data used to train its models, has also applied to join the working groups.

Google has confirmed its application, and Amazon expressed hope to leverage its expertise to ensure the code’s effective implementation.

Maximilian Gahntz, AI policy lead at the Mozilla Foundation, raised concerns about companies’ apparent attempts to evade transparency obligations. "The AI Act is a critical opportunity to clarify this important matter and shed light on the so-called black box of AI," he stated.

BIG BUSINESS AND PRIORITIES

There is criticism from some sectors that the EU is prioritizing tech regulation over innovation, leading those drafting the code to seek a balanced approach.

Recently, former European Central Bank president Mario Draghi emphasized that the bloc needs a more coordinated industrial policy, quicker decision-making, and significant investments to compete with China and the United States.

Thierry Breton, a prominent advocate for EU regulation and a critic of non-compliant tech firms, recently resigned as European Commissioner for the Internal Market after disagreements with Ursula von der Leyen, the president of the EU’s executive body.

As protectionist sentiments rise within the EU, domestic tech companies are hoping for regulations that will accommodate the needs of emerging European startups.

"We’ve insisted that these obligations should be manageable and, if possible, tailored for startups," said Maxime Ricard, policy manager for Allied for Startups, a network representing smaller tech firms.

Tech companies will have until August 2025 to align their compliance efforts with the code once it is published early next year.

Non-profit organizations, including Access Now, the Future of Life Institute, and Mozilla, have also applied to assist in drafting the code. Gahntz cautioned, "As we move forward with detailing the obligations outlined in the AI Act, we need to ensure that major AI corporations do not undermine essential transparency requirements."
