When EU policymakers reached a deal on the world’s first comprehensive AI law in December 2023, they celebrated it as a landmark moment for global tech regulation. But nearly two years later, that sense of triumph has faded. The bloc now faces internal backlash, implementation delays, and mounting concerns that its approach may hinder innovation rather than guide it.
A Rushed Breakthrough That Now Appears Fragile
Negotiators worked through the night to finalise the AI Act, a framework built to regulate artificial intelligence based on risk. The aim was simple in theory:
- ban harmful applications,
- tightly regulate high-risk systems,
- and allow low-risk AI to operate with fewer constraints.
The legislation was intended to set a global standard and reinforce Europe’s image as the world’s leading tech regulator. But the law’s complexity — combined with political pressure and last-minute changes — has since exposed major weaknesses.
ChatGPT Changed Everything — Too Late in the Process
When OpenAI released ChatGPT in late 2022, the technology instantly shifted global attitudes toward AI. EU negotiators were already deep into drafting the AI Act, and suddenly faced pressure to regulate large, general-purpose models like ChatGPT as well.
The original draft barely mentioned such systems, which were considered experimental at the time. In response to public and political pressure — including a high-profile open letter calling for a pause on advanced AI development — lawmakers scrambled to add new provisions.
Several officials involved say this last-minute rewrite destabilised the entire framework and pushed the law beyond what it was designed to handle.
Treating AI Like a Product, Not a Process
Legal experts argue that the Commission’s early mistake was conceptual. Instead of treating AI as a constantly evolving process, the AI Act approached it like a static product — similar to elevators or consumer electronics.
But unlike a physical product, AI changes, adapts, and improves, making fixed regulatory requirements difficult to apply.
This mismatch has made compliance confusing and, for many companies, prohibitively expensive.
Businesses Say the Act Is Too Complex to Implement
When the law entered into force in 2024, it came with dozens of follow-up regulations, standards, and technical guidelines that were still unwritten. Companies have since complained about:
- vague timelines,
- unclear definitions,
- and the enormous administrative burden required to classify and monitor their AI systems.
Start-ups warn that the law entrenches the market dominance of large corporations — the ones best equipped to absorb compliance costs. Yet even large players, from Big Tech firms such as Meta to major European industrial groups, say the rules could discourage them from launching new AI tools in the EU.
Several major firms have urged Brussels to postpone the rollout.
Delays and Political Pressure
This week, the European Commission proposed delaying one of the most important parts of the AI Act — the rules governing high-risk AI systems — by at least a year. The move is the clearest acknowledgment yet that the EU is struggling to implement its own legislation.
The shift also aligns with the bloc’s new political priorities. Since beginning her second term, Commission President Ursula von der Leyen has pushed competitiveness and investment, rather than regulation, as the heart of the EU’s digital strategy.
Debate Over Whether AI Regulation and Innovation Really Conflict
Critics say the AI Act risks making Europe less competitive in a global race dominated by the US and China. Supporters counter that the law merely codifies safety practices many AI labs already follow — and that innovation should not come at the expense of public safety.
Some worry that weakening the Act now could undermine Europe’s credibility as a regulator. Others argue that the legislation should be completely rethought.
A Broader Problem: Europe’s Innovation Gap
Several experts point out that the AI Act is only one piece of a much larger challenge. Europe still faces:
- fragmented markets,
- talent shortages,
- limited venture capital,
- and slow commercialization of research.
They argue that even perfect regulation will not fix Europe’s innovation deficit unless structural issues are addressed.
A Seat at the Global Table — For Now
Despite the setbacks, the EU’s AI Act has forced the world to take Europe seriously in the debate over AI governance. Other regions — including Japan, Brazil, and US states like California — have adopted similar transparency rules.
The question now is whether Europe can refine its legislation without losing its reputation as the main global authority on tech regulation — and without sacrificing its ambitions to build a competitive AI sector.
