The European Union is so far the only jurisdiction globally to drive forward comprehensive rules for artificial intelligence with its AI Act.
The European Union formally kicked off enforcement of its landmark artificial intelligence law Sunday, paving the way for tough restrictions and potential large fines for violations.
The EU AI Act, a first-of-its-kind regulatory framework for the technology, formally entered into force in August 2024.
On Sunday, the compliance deadline passed for the law’s prohibitions on certain artificial intelligence systems and its requirements to ensure sufficient AI literacy among staff.
That means companies must now comply with the restrictions and can face penalties if they fail to do so.
The AI Act bans certain applications of AI that it deems to pose an “unacceptable risk” to citizens.
Those include social scoring systems, real-time facial recognition and other forms of biometric identification that categorize people by race, sex life, sexual orientation and other attributes, and “manipulative” AI tools.
Companies face fines of as much as 35 million euros ($35.8 million) or 7% of their global annual revenues — whichever amount is higher — for breaches of the EU AI Act.
The size of the penalties will depend on the infringement and size of the company fined.
That’s higher than the fines possible under the GDPR, Europe’s strict digital privacy law. Companies face fines of up to 20 million euros or 4% of annual global turnover for GDPR breaches.
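The fine ceilings described above follow a simple “whichever is higher” rule. A minimal sketch of that arithmetic (the function names are illustrative, not legal terms):

```python
def ai_act_max_fine_eur(global_annual_revenue_eur: float) -> float:
    """Upper bound of an EU AI Act fine: 35 million euros or 7% of
    global annual revenue, whichever amount is higher."""
    return max(35_000_000, 0.07 * global_annual_revenue_eur)

def gdpr_max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Upper bound of a GDPR fine: 20 million euros or 4% of
    global annual turnover, whichever amount is higher."""
    return max(20_000_000, 0.04 * global_annual_turnover_eur)

# Example: a company with 1 billion euros in global annual revenue.
revenue = 1_000_000_000
print(ai_act_max_fine_eur(revenue))  # 70 million euros under the AI Act
print(gdpr_max_fine_eur(revenue))    # 40 million euros under GDPR
```

For large companies the percentage dominates, which is why the AI Act’s 7% ceiling outstrips GDPR’s 4%; for smaller firms, the flat minimums of 35 million and 20 million euros apply instead.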
‘Not perfect’ but ‘very much needed’
It’s worth stressing that the AI Act still isn’t in full force — this is just the first step in a series of many upcoming developments.
Tasos Stampelos, head of EU public policy and government relations at Mozilla, told CNBC previously that while it’s “not perfect,” the EU’s AI Act is “very much needed.”
“It’s quite important to recognize that the AI Act is predominantly a product safety legislation,” Stampelos said in a CNBC-moderated panel in November.
“With product safety rules, the moment you have it in place, it’s not a done deal. There are a lot of things coming and following after the adoption of an act,” he said.
“Right now, compliance will depend on how standards, guidelines, secondary legislation or derivative instruments that follow the AI Act, that will actually stipulate what compliance looks like,” Stampelos added.
In December, the EU AI Office, a newly created body regulating the use of models in accordance with the AI Act, published a second-draft code of practice for general-purpose AI (GPAI) models, which refers to systems like OpenAI’s GPT family of large language models, or LLMs.
The second draft contained exemptions for providers of certain open-source AI models while including the requirement for developers of “systemic” GPAI models to undergo rigorous risk assessments.
Setting the global standard?
Several technology executives and investors are unhappy with some of the more burdensome aspects of the AI Act and worry it might strangle innovation.
In June 2024, Prince Constantijn of the Netherlands told CNBC in an interview that he’s “really concerned” about Europe’s focus on regulating AI.
“Our ambition seems to be limited to being good regulators,” Constantijn said. “It’s good to have guardrails. We want to bring clarity to the market, predictability and all that. But it’s very hard to do that in such a fast-moving space.”
Still, some think that having clear rules for AI could give Europe a leadership advantage.
“While the U.S. and China compete to build the biggest AI models, Europe is showing leadership in building the most trustworthy ones,” Diyan Bogdanov, director of engineering intelligence and growth at Bulgarian fintech firm Payhawk, said via email.
“The EU AI Act’s requirements around bias detection, regular risk assessments, and human oversight aren’t limiting innovation — they’re defining what good looks like,” he added.
https://www.cnbc.com/2025/02/03/eu-kicks-off-landmark-ai-act-enforcement-as-first-restrictions-apply.html