In a groundbreaking move, the European Union has passed the EU Artificial Intelligence Act, the world's first comprehensive legal framework for AI. Set to come into full effect in 2026, the Act is poised to reshape how AI systems are developed, deployed, and governed across the EU, with a focus on safety, transparency, and accountability.
The Act takes a risk-based approach to AI regulation, categorising systems according to the potential harm they pose to individuals and society: the higher the risk, the stricter the rules.
Risk Categories under the AI Act
The legislation defines four levels of risk for AI systems:
- Unacceptable Risk: These systems are banned outright. They include:
- AI used for social scoring (similar to China's surveillance model);
- Real-time remote biometric identification (such as facial recognition) in public spaces for law enforcement, subject to narrowly defined exceptions;
- Emotion recognition in workplaces and schools;
- Manipulative or deceptive AI techniques; and
- Untargeted scraping of the internet or surveillance footage to build biometric databases.
- High Risk: These systems are allowed but heavily regulated. Examples include AI used in:
- Critical infrastructure (e.g., transport);
- Education and employment (e.g., candidate screening);
- Healthcare and law enforcement (e.g., predictive policing); and
- Systems affecting fundamental rights.
High-risk AI systems must meet strict requirements: ensuring human oversight, robust risk management, transparency, and cybersecurity, along with a clear documentation and registration process.
- Limited Risk: These systems must meet minimal transparency obligations. For example, chatbots must disclose that users are interacting with AI.
- Minimal Risk: The majority of AI applications, like spam filters or video game AI, fall under this category and face no additional obligations.
Why This Matters
AI is becoming an essential part of daily life, from healthcare and education to finance and public services. However, without proper safeguards, AI can also be used to manipulate, discriminate, or surveil, often without people even realising it. The EU AI Act is designed to protect citizens' rights, ensure ethical AI development, and build public trust.
A key principle of the Act is that AI systems should serve people, not control them. The European Parliament strove to ensure that AI in the EU is not only safe and reliable, but also transparent, traceable, non-discriminatory, and environmentally sustainable. Importantly, the law emphasises human oversight to prevent harmful automation.
Impact and Future Outlook
The EU AI Act sets a global precedent. As the first binding AI law, it could become a blueprint for other regions and influence how AI is regulated worldwide. It will also shape how tech companies, both inside and outside the EU, design and market their AI tools, as compliance will be required for any AI system operating in the EU market.
In summary, the EU AI Act marks a bold step towards responsible AI governance. By balancing innovation with fundamental rights, the EU aims to lead the world in ethical AI development, ensuring that technological progress does not come at the cost of human dignity or democratic values.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.