
End of Unregulated Artificial Intelligence: Europe Enforces World’s First AI Law
EU's sweeping regulatory model is poised to become a global benchmark – reshaping how artificial intelligence is governed, developed and deployed across borders.

The European Union made history last Friday by enforcing the EU Artificial Intelligence Act, the world’s first comprehensive legal framework for artificial intelligence. The legislation, which entered its second phase of enforcement on August 2, marks a global turning point in how AI is governed, deployed and held accountable.
“This landmark law ends the era of unregulated AI,” the European Parliament said in a statement. It described the Act as a critical step in ensuring that AI is transparent, used safely and prevented from causing abuse or harm.
The AI Act introduces a risk-based approach to regulation, categorising AI systems based on their potential impact on safety, fundamental rights and public trust. Applications deemed to pose “unacceptable risk” – such as biometric surveillance in public spaces, social scoring and manipulative technologies targeting vulnerable populations – are now banned across the bloc.
Meanwhile, systems considered “high-risk,” such as those used in education, healthcare, law enforcement and employment, must meet stringent compliance requirements. These include technical documentation, human oversight protocols, and conformity assessments to ensure safety and fairness.
A central feature of the law is its regulation of general-purpose AI models – the type used in chatbots, code generation, and content creation. Starting August 2, companies developing large-scale models must comply with new transparency rules, safety testing, incident reporting obligations and cybersecurity standards. This includes disclosing the nature of training data and putting in place risk mitigation plans for potential misuse.
The European Commission has also launched the AI Office, a new body tasked with monitoring implementation, guiding national regulators, and coordinating enforcement across member states. Companies found in breach of the regulation could face fines of up to €35 million or 7 per cent of global annual turnover, depending on the severity of the offence.
While the law was officially adopted in 2024, its provisions are being rolled out in stages. The first compliance deadline passed in February 2025, banning certain uses of AI deemed too dangerous to be permitted. The August 2 milestone activates obligations for developers of powerful foundation models, as well as rules on governance, transparency and systemic risk.
The regulation’s influence is already being felt worldwide. Some of the largest AI firms – including OpenAI, Google DeepMind and xAI – have signalled support for portions of the EU’s newly introduced Code of Practice for General-Purpose AI, a voluntary framework intended to ease companies into full compliance before legally binding deadlines kick in by 2026–2027. However, Meta has so far declined to sign on, raising concerns about enforcement gaps and regulatory fragmentation.
Outside the EU, governments and industry observers are watching closely. Analysts say the Act could become the global benchmark for AI governance, much like the EU’s GDPR (General Data Protection Regulation) reshaped global data privacy laws in 2018. Countries including the United States, Canada, Brazil and Japan are now drafting their own frameworks, many borrowing elements from the EU approach.
Critics of the AI Act have warned that compliance costs may deter innovation, particularly for startups and open-source developers. However, Brussels insists the law strikes a balance between innovation and public interest. Special provisions and “regulatory sandboxes” have been created to allow smaller entities to experiment and scale with regulatory support.
Supporters say the law is not just about regulation – it is about trust. By enforcing transparency, human oversight and accountability, the EU hopes to foster public confidence in AI at a time when the technology is increasingly shaping education, health care, governance, and democratic processes.
“This is not a law against AI,” said EU Commissioner Thierry Breton earlier this year. “It is a law for safe and trustworthy AI.”
With the EU AI Act now partially in force and more deadlines on the horizon, the global conversation around artificial intelligence is no longer centred on whether it should be regulated, but on how quickly, how effectively, and by whom.