I. Introduction
The Artificial Intelligence Act ("Act"), the first comprehensive legal framework on artificial intelligence prepared by the European Union ("EU"), was adopted by the European Parliament on 13 March 2024 and approved by the Council of the EU on 21 May 2024. The Act was subsequently published in the Official Journal of the EU on 12 July 2024 and entered into force on 1 August 2024.
As developments in the field of artificial intelligence, particularly at the intersection with law, continue to emerge worldwide, we believe it is timely and valuable to take a closer look at the key provisions introduced by the Act. This review is particularly relevant as certain significant provisions of the Act will enter into application on 2 August 2025, nearly one year after its entry into force.
You may access the full text of the Act here.
II. Definition, Purpose, and Scope
Under the Act, an artificial intelligence ("AI") system is a machine-based system designed to operate with varying levels of autonomy, which may exhibit adaptiveness after deployment and which, for explicit or implicit objectives, infers from the input it receives how to generate outputs, such as predictions, content, recommendations, or decisions, that can influence physical or virtual environments.
According to Article 1 of the Act, the primary objectives are to improve the functioning of the internal market in alignment with EU values; to promote the adoption of human-centric and trustworthy AI systems; to ensure a high level of protection for health, safety, and fundamental rights, including democracy, the rule of law, and environmental protection as emphasized in the Charter of Fundamental Rights of the European Union; to provide safeguards against the potential harmful effects of AI systems across the EU; and to support innovation.
The Act establishes a comprehensive legal framework aimed at ensuring that AI systems are safe, transparent, and reliable. While protecting the rights and freedoms of individuals, the Act also aims to create a global standard for AI governance. Within this framework, the Act introduces a wide range of obligations relating to risk assessment, transparency measures, and compliance with fundamental rights, particularly for high-risk AI applications.
The Act seeks to establish globally aligned rules governing the use of AI systems and introduces several significant regulatory mechanisms. These include:
- Harmonized rules are established for placing AI systems on the EU market, integrating them into services, and enabling their use within EU territory. To support effective enforcement, the EU AI Office and the AI Board were established, becoming operational as of August 2025. The EU AI Office comprises five specialized units: "AI and Robotics Excellence", "Regulation and Compliance", "AI Safety", "AI Innovation and Policy Coordination", and "AI for Societal Good". Its mandate includes supporting the implementation of the Act, enforcing AI-related rules, strengthening the development and deployment of trustworthy AI, and promoting international cooperation. The AI Board, composed of representatives of the EU Member States and supported by the EU AI Office within the European Commission, acts as an advisory body. Its core role is to facilitate coordination among national authorities, foster the exchange of technical and regulatory expertise, provide policy and innovation recommendations, and ensure the consistent and effective implementation of the Act across the EU.
- Certain AI practices deemed to pose an "unacceptable risk" have been prohibited outright. As of 2 February 2025, the use of such AI applications has been strictly banned, and the European Commission has published updated guidelines clarifying the scope of these prohibitions.
- For high-risk AI systems, the Act introduces specific compliance obligations for both providers and deployers. By 2 February 2026, official guidelines are expected to be issued, and mandatory conformity assessments, technical documentation requirements, and regulatory audits will commence.
- Harmonized transparency obligations are imposed on certain AI systems. In particular, for "limited-risk" systems, such as deepfakes or AI-generated content, it is mandatory to inform users when the content has been generated by AI. The Act further mandates the publication of a guidance document regarding the implementation of transparency rules; however, this document has not yet been published by the European Commission.
- For general-purpose AI models ("GPAI"), the Act introduces harmonized rules for their deployment in the EU market. On 22 April 2025, the Commission published preliminary guidelines for GPAI providers, covering topics such as model definitions, provider responsibilities, dataset summaries, copyright compliance, and systemic risk assessments. As of 2 August 2025, GPAI providers entering the EU market are required to comply with the Act's transparency and copyright obligations. GPAI models that were placed on the market before 2 August 2025 must achieve full compliance by 2 August 2027.
- The Act establishes detailed rules regarding market surveillance, regulatory oversight, governance, and sanctions.
- Measures are also introduced to foster innovation while ensuring adherence to strict regulatory standards.
According to Article 2, the Act's scope of application extends broadly. It applies to:
- Providers placing AI systems on the EU market or putting them into service, regardless of whether they are established in the EU or in a third country;
- Deployers of AI systems who are established or operating within the EU;
- Providers and deployers established outside the EU where the output of the AI system is used within EU territory;
- Importers and distributors of AI systems;
- Manufacturers placing products on the market under their own name or trademark where those products incorporate AI systems; and
- Authorized representatives of providers not established in the EU.
This extraterritorial effect demonstrates that the Act is not limited to the EU but is intended to influence AI regulation globally, impacting a broad range of stakeholders.
Finally, the Act does not apply to AI systems that are placed on the market, made available, or used exclusively for military, defense, or national security purposes.
III. A Retrospective on the Implementation of the Act on Its Anniversary
- The Act entered into force on 1 August 2024, establishing a comprehensive legal framework for artificial intelligence within the EU. Since then, several key milestones have shaped the implementation process.
- On 2 February 2025, provisions concerning prohibited AI practices and AI literacy came into effect.
- As of 2 August 2025, transparency, safety, and copyright compliance obligations for general-purpose AI models began to apply.
- Civil society organizations have criticized the implementation process for its lack of transparency and susceptibility to lobbying, as well as biometric surveillance practices in certain Member States, particularly Hungary.
- To support providers in achieving compliance, the European Commission has published guidelines, implementation manuals, and transparency templates, some of which are also referenced in this note.
- Newly established institutions, such as the EU AI Office, have become operational, and ongoing efforts focus on developing compliance standards, guidelines, and technical tools for AI regulation.
- The EU seeks to enhance its technological competitiveness through AI gigafactories, investment funds, and international collaborations.
- With the enforcement of the Act, copyright protection, transparency, and explainability have emerged as top priorities for user rights and public oversight.
- By 2 August 2026, the Act's provisions are expected to become fully applicable.
- Mandatory compliance for high-risk AI systems will take effect in 2027.
The success of the implementation process will depend heavily on regulatory transparency, the effectiveness of oversight mechanisms, and the active involvement of civil society.
Across the EU, several Member States have also taken steps to align their national legal frameworks with the Act:
- Spain: Spain is actively developing a comprehensive national AI regulatory framework designed to complement the Act and establish a local enforcement regime. On 11 March 2025, the Council of Ministers approved the first draft of the "Preliminary Law for the Good Use and Governance of AI" (Anteproyecto de Ley para el Buen Uso y la Gobernanza de la IA). The draft also provides for fines of up to EUR 35 million on AI providers that fail to properly label AI-generated content.
- Finland: Finland's national implementing legislation, which will cover the designation of notified bodies, the responsibilities of national authorities, and penalties for non-compliance, has yet to enter into force. On 8 May 2025, the government submitted a supporting legislative proposal to Parliament, which remains under review. Consequently, as of 2 August 2025, the following provisions will not yet be implemented in Finland: (i) sanctions for violations of the Act; (ii) regulatory audits conducted by national authorities; and (iii) procedures for the designation of notified bodies.
- Denmark: Denmark has become the first EU Member State to adopt national legislation fully implementing the Act, positioning itself as a regulatory leader. On 8 May 2025, the Danish Parliament adopted a comprehensive framework law establishing the governance structure necessary for implementing the Act, setting an example for other Member States facing the 2 August 2025 deadline. The adopted legislation organizes the supervisory mechanism around three key authorities: (i) the Digital Government Agency (Digitaliseringsstyrelsen), (ii) the Danish Data Protection Authority (Datatilsynet), and (iii) the Danish Court Administration (Domstolsstyrelsen).
- Poland: On 10 February 2025, Poland published the second draft of its national AI legislation, designed to implement the Act. This draft reflects changes made based on public consultation feedback received during the initial proposal phase.
IV. Prohibited AI Practices
Article 5 of the Act sets out the categories of AI practices that are considered incompatible with EU values and are therefore strictly prohibited. In addition, the "Commission Guidance on Prohibited AI Practices under the AI Act" ("Guidance"), published on 4 February 2025, provides further clarification and specifies the exact scope of these prohibitions. According to the Act and the Guidance, the following AI systems are explicitly banned:
- AI systems deploying subliminal, purposefully manipulative, or deceptive techniques that materially distort the behavior of a person or group, impairing their ability to make an informed decision and causing, or being reasonably likely to cause, significant harm,
- AI systems exploiting vulnerabilities related to age, disability, or social and economic circumstances in order to distort the behavior of a person or group, thereby creating a significant risk of harm,
- AI systems generating social scores by evaluating or classifying individuals or groups based on their social behavior or personal characteristics, where the resulting score leads to detrimental or unfavorable treatment that is unjustified or disproportionate,
- AI systems developed, marketed, made available, or used to predict an individual's likelihood of committing a crime based solely on profiling or the assessment of personality traits and characteristics,
- AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage,
- AI systems used to infer emotions of individuals in workplaces or educational institutions for purposes unrelated to health or safety,
- AI systems classifying individuals based on biometric data to infer race, political opinions, trade union membership, religious or philosophical beliefs, sexual life, or sexual orientation, except where biometric data is lawfully processed for filtering or categorization purposes by law enforcement authorities,
- AI systems used by law enforcement authorities for real-time remote biometric identification in publicly accessible spaces, except in limited cases involving targeted searches for specific victims, prevention of imminent threats, or identification of specific criminal suspects. According to the Guidance, this prohibition applies only where all of the following conditions are met: (i) a remote biometric identification system is used, (ii) the system is actively deployed, (iii) identification occurs in real time, (iv) it takes place in publicly accessible spaces, and (v) it serves law enforcement purposes.
You may access the full text of the Guidance here.
V. High-Risk AI Systems
One of the core focuses of the Act is the regulation of high-risk AI systems. These systems are addressed comprehensively under Chapter III of the Act, which sets out rules for AI applications that pose significant risks to safety, health, or fundamental rights.
Under Article 6(1), an AI system is classified as high-risk, irrespective of whether it is placed on the market or put into service independently of a product, if it meets both of the following conditions:
- The AI system is intended to be used as a safety component of a product covered by the EU harmonization legislation listed in Annex I, or it is itself a product falling within the scope of that legislation; and
- The product whose safety component is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment under the harmonization legislation listed in Annex I before being placed on the market or put into service.
If these two cumulative criteria are met, the AI system will be classified as high-risk.
In addition to these conditions, the Act explicitly lists in Annex III a range of specific AI systems that are automatically considered high-risk due to their potential impact on individual rights and societal interests (a schematic sketch of the overall classification logic follows the list below). These include:
- Remote biometric identification systems and AI systems used for biometric classification based on sensitive or protected attributes,
- AI systems designed for emotion recognition,
- AI systems used as safety components in the operation of critical digital infrastructure, road traffic, or the supply of water, gas, heating, or electricity,
- AI systems used in educational settings, including those for determining access to educational institutions, assessing learning outcomes, assigning individuals to appropriate education levels, or monitoring prohibited conduct during examinations,
- AI systems used in recruitment or employment, particularly those applied to evaluate candidates, make decisions on promotions or terminations, assess performance, or determine employment-related conditions,
- AI systems used by public authorities to assess individuals' eligibility for public benefits, or to reduce, withdraw, or recover such benefits,
- AI systems determining individuals' credit scores, except when used solely for detecting financial fraud,
- AI systems used for risk assessment and pricing in life or health insurance,
- AI systems evaluating individuals' likelihood of becoming victims of crime or assessing the reliability of evidence in criminal investigations or prosecutions,
- AI systems used for assessing eligibility in asylum, visa, or residence permit applications,
- AI systems assisting judicial authorities in legal research, statutory interpretation, or application of the law,
- AI systems designed to influence elections or referenda, or to manipulate voting behavior.
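For readers mapping these legal tests onto a compliance workflow, the Article 6(1) conditions and the Annex III categories can be read together as a simple decision procedure. The Python sketch below is purely illustrative: the boolean flags and category labels are our own simplified stand-ins for the legal criteria, and the Article 6(3) derogations are omitted.

```python
# Illustrative sketch only: a simplified encoding of the Act's high-risk
# classification logic (Article 6(1) and Annex III). Category labels are
# informal stand-ins for the legal text, not an authoritative mapping.

ANNEX_III_CATEGORIES = {
    "remote_biometric_identification",
    "emotion_recognition",
    "critical_infrastructure_safety",
    "education",
    "employment",
    "essential_public_services",
    "credit_scoring",
    "insurance_risk_pricing",
    "law_enforcement",
    "migration_asylum_border",
    "administration_of_justice",
    "elections",
}

def is_high_risk(safety_component_of_annex_i_product: bool,
                 third_party_conformity_assessment_required: bool,
                 annex_iii_category: str | None = None) -> bool:
    """Return True if the system would fall under the high-risk regime."""
    # Article 6(1): both conditions must be met cumulatively.
    if (safety_component_of_annex_i_product
            and third_party_conformity_assessment_required):
        return True
    # Annex III: listed use cases are high-risk by default
    # (the Article 6(3) derogations are omitted here for brevity).
    return annex_iii_category in ANNEX_III_CATEGORIES

# Example: a CV-screening tool used in recruitment decisions.
print(is_high_risk(False, False, "employment"))  # True
```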
VI. Obligations of Providers of High-Risk AI Systems
The Act also sets out a detailed framework of obligations for providers of high-risk AI systems. These obligations primarily target those who develop, place on the market, or make available such systems, aiming to ensure their safe, ethical, and legally compliant use within the EU.
Under the Act, the key obligations imposed on providers of high-risk AI systems include the following:
- In accordance with Article 9, providers must establish and maintain a risk management system. This system should identify, evaluate, and document all reasonably foreseeable risks to health, safety, and fundamental rights, and retain such documentation for audit purposes.
- Pursuant to Article 10, providers must ensure the use of high-quality, verified, and tested datasets when training high-risk AI systems, thereby reducing risks of bias and inaccuracies.
- Under Article 11, providers are required to prepare and maintain technical documentation for their systems. These documents must demonstrate compliance with the Act's requirements and present clear and accessible information for review by national authorities.
- According to Article 12, high-risk AI systems must be designed to enable the automatic recording of events (logging) throughout their lifecycle, allowing effective traceability and monitoring (see the illustrative sketch after this list).
- Under Article 13, high-risk AI systems must be developed in accordance with the principle of transparency. Providers must ensure that clear, accurate, and comprehensive information about the system is available in a digital format.
- Pursuant to Article 14, high-risk AI systems must be designed and developed to allow effective human oversight, including appropriate human-machine interfaces. Human oversight mechanisms should aim to prevent misuse and minimize risks to health, safety, and fundamental rights.
- Under Article 15, high-risk AI systems must also achieve appropriate levels of accuracy, robustness, and cybersecurity. Providers are responsible for ensuring that these standards are consistently maintained throughout the system's operational lifecycle.
- As required by Article 17, providers must establish and maintain a quality management system. This system should include documented policies, procedures, and instructions, ensuring that compliance is monitored and maintained over the entire lifecycle of the AI system.
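To illustrate what Article 12's logging requirement can look like in practice, the sketch below shows a minimal append-only, timestamped event log. The file name and field names are assumptions made purely for illustration; the Act does not mandate any particular log format.

```python
# Minimal illustrative sketch of Article 12-style event logging:
# an append-only, timestamped record of system events. Field names
# and the log location are assumptions, not requirements of the Act.
import json
import time
from pathlib import Path

LOG_PATH = Path("ai_system_events.jsonl")  # hypothetical log location

def record_event(event_type: str, details: dict) -> None:
    """Append one timestamped event to a JSON-lines audit log."""
    entry = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "event_type": event_type,  # e.g. "inference", "model_update"
        "details": details,
    }
    with LOG_PATH.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

# Example: log a single inference event for later traceability.
record_event("inference", {"model_version": "1.2.0", "input_hash": "abc123"})
```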
From a regulatory implementation perspective, as of 2 August 2025, each EU Member State is required to designate or establish at least one notifying authority responsible for setting up and managing procedures relating to the appointment, notification, and monitoring of conformity assessment bodies. Looking ahead, 2 February 2026 marks the expected publication of official guidelines for high-risk AI systems, initiating mandatory conformity assessments, technical documentation reviews, and other compliance audits. By 2 August 2027, the classification rules for high-risk AI systems, as well as the corresponding obligations for providers, will come fully into effect.
VII. Measures to Support Innovation in the Field of Artificial Intelligence
Article 57 of the Act requires each EU Member State to establish at least one regulatory sandbox for artificial intelligence at the national level and to ensure that it is fully operational by 2 August 2026. These regulatory sandboxes are controlled testing environments designed for the development, training, testing, and validation of innovative AI systems before they are placed on the market. Their primary objectives are to enhance legal certainty for AI developers, foster innovation and competition, strengthen oversight by national authorities, and facilitate market access for AI systems within the EU. In this context, Spain became the first Member State to launch an AI regulatory sandbox following the adoption of the Act in 2024; the Spanish program began in May 2025 with a series of workshops led by AI and legal experts.
Within these sandboxes, national competent authorities are responsible for providing guidance, supervision, and support. Their role includes assisting participants in identifying and assessing risks to fundamental rights, health, and safety; testing and implementing mitigation measures; and ensuring that such measures are aligned with the Act's obligations and requirements.
VIII. Penalties
Chapter XII of the Act governs the penalties and administrative fines applicable to infringements of its provisions. It explicitly requires that penalties be effective, proportionate, and dissuasive. Depending on the nature and severity of the violation, administrative fines are tiered as follows (an illustrative calculation follows the list):
- Violations of Article 5 (Prohibited AI Practices): An administrative fine of up to EUR 35,000,000 may be imposed or, if the offender is an undertaking, up to 7% of its total worldwide annual turnover for the preceding financial year, whichever is higher.
- Violations of Other Obligations under the Act: Non-compliance with other obligations set out in the Act may result in an administrative fine of up to EUR 15,000,000 or, where the offender is an undertaking, up to 3% of its total worldwide annual turnover for the preceding financial year, whichever is higher.
- Provision of Incorrect, Incomplete, or Misleading Information to National Authorities: An administrative fine of up to EUR 7,500,000 may be imposed or, if the offender is an undertaking, up to 1% of its total worldwide annual turnover for the preceding financial year, whichever is higher.
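Because each tier caps the fine at the greater of a fixed amount and a percentage of worldwide turnover, the applicable ceiling can be computed mechanically. The sketch below illustrates this "whichever is higher" rule for undertakings; the tier labels are our own shorthand, and the special regime for SMEs (for which the lower of the two amounts applies) is omitted for brevity.

```python
# Illustrative "whichever is higher" fine ceilings for undertakings.
# Tier labels are informal shorthand; amounts are in EUR.

FINE_TIERS = {
    "prohibited_practices":   (35_000_000, 0.07),  # Article 5 violations
    "other_obligations":      (15_000_000, 0.03),
    "misleading_information": (7_500_000, 0.01),
}

def max_fine(tier: str, worldwide_annual_turnover: float) -> float:
    """Return the maximum administrative fine for an undertaking."""
    fixed_cap, turnover_share = FINE_TIERS[tier]
    # For undertakings, the ceiling is the higher of the two amounts.
    return max(fixed_cap, turnover_share * worldwide_annual_turnover)

# Example: an undertaking with EUR 2 billion turnover violating Article 5
# faces a ceiling of EUR 140 million (7% of turnover > EUR 35 million).
print(max_fine("prohibited_practices", 2_000_000_000))  # 140000000.0
```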
When determining whether to impose an administrative fine and setting its amount, several factors will be considered, including the purpose of the AI system in question; the nature, seriousness, and duration of the infringement; the market share of the infringing entity; whether the entity has been subject to previous sanctions; the degree of cooperation with national authorities to mitigate adverse effects; whether the infringement was intentional or negligent; and how and when the infringement was reported to the competent authorities.
These elements are assessed to ensure that each case is addressed through a fair, proportionate, and effective enforcement process tailored to its specific circumstances.
Under the Act, Member States retain discretion to determine whether public authorities established within their jurisdiction are subject to administrative fines and, if so, to define the scope of such sanctions. Furthermore, the competent authority responsible for imposing fines — whether national courts or other designated bodies — depends on each Member State's legal framework. Regardless of the approach, however, enforcement must ensure equivalent effects across the EU.
The exercise of sanctioning powers remains subject to the procedural safeguards provided under EU and national law, particularly the rights to effective judicial remedies and fair trial guarantees. Additionally, Member States are required to submit annual reports to the European Commission, detailing the administrative fines imposed, legal proceedings initiated, and litigation outcomes under the Act.
IX. Entry into Force and Implementation Timeline
The Act was published in the Official Journal of the European Union and entered into force on 1 August 2024. However, its full applicability has been structured as a gradual implementation process, with most provisions applying from 2 August 2026 and certain provisions from 2 August 2027.
Key milestones in the implementation timeline are as follows:
- Provisions regarding prohibited AI practices entered into application on 2 February 2025.
- Rules applicable to general-purpose AI (GPAI) models became effective on 2 August 2025.
- Provisions concerning administrative fines and sanctions for non-compliance also took effect on 2 August 2025.
- By 2 February 2026, the European Commission is expected to publish a comprehensive set of guidelines outlining the practical implementation of the Act.
- As of 2 August 2026, the general provisions of the Act, including those related to high-risk AI systems, will become fully applicable.
X. Conclusion
The Act represents a comprehensive regulatory framework established by the European Union with the objective of setting global standards for the development and deployment of artificial intelligence systems. By prioritizing human rights and safety while simultaneously fostering innovation, the Act introduces a risk-based approach to classifying AI systems and sets forth specific obligations for each risk category.
Through its strict rules on high-risk AI systems and prohibited practices, the Act seeks to safeguard individual rights and uphold EU values. Furthermore, by providing for effective, proportionate, and dissuasive penalties in cases of non-compliance, the Act aims to ensure the efficient and uniform application of its provisions across the EU. Overall, the Act marks a significant step toward strengthening the European Union's global leadership in the regulation and governance of artificial intelligence technologies.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.