AI POLICY
The world’s first AI law is already glitching
The EU’s AI Act was meant to be a global first: a full set of rules to make AI safe and trustworthy across Europe.
When it was agreed in late 2023, officials celebrated the end of a long, difficult negotiation. Two years later, the mood is far less confident.
The law aims to ban the riskiest uses of AI, set strict rules for high-risk systems, and lightly regulate everything else.
But the act became complicated after lawmakers rushed to include general-purpose models like ChatGPT.
That change made the legislation harder to implement, and businesses say the details are still unclear.
This week, the European Commission delayed a major part of the act, the first public sign that Brussels is having trouble enforcing its own rules.
The delay has reopened a key question: did the EU act too quickly, and can these rules be fixed without slowing Europe’s progress in AI?
Europe is trying to stay competitive with the US and China while keeping strong protections.
But start-ups say compliance costs are heavy, and large companies warn that the uncertainty makes them hesitant to roll out advanced AI tools in Europe.
Here’s what you should know:
Brussels delayed key parts of the AI Act after implementation proved difficult.
Both start-ups and big companies say the rules are unclear and costly to follow.
The EU is now deciding whether to adjust the law or give it more time to settle.
Big Tech isn’t thrilled either
At the same time, EU priorities have shifted toward boosting investment and growth. Some officials now want to rethink the act entirely.
Others say it’s too early to make big changes when the law has barely begun.
Even so, the AI Act has influenced global debates, with places like Japan, Brazil, and California adopting similar transparency ideas.
Europe still wants to set the standard; it just needs to find the right balance between regulation and innovation.
Imagine building a landmark AI law and then having to delay your own homework. - MG