ARTICLE
14 October 2025

The European Commission Publishes Draft Guidance For Serious AI Incidents Under The EU AI Act

A&O Shearman


On September 26, 2025, the European Commission published the 'Draft Guidance Article 73 AI Act – Incident Reporting' (the Draft Guidance). The Draft Guidance is intended to help providers and deployers of High-Risk AI Systems (as defined under the EU Artificial Intelligence Act (the AI Act)) comply with their post-market monitoring obligations under Article 73 of the AI Act to report serious incidents and widespread infringements involving High-Risk AI Systems to national authorities.

The Draft Guidance aims to clarify definitions and reporting obligations by providing explanations (e.g. the nature of obligations and what, when and how to report) and giving practical examples.

Amongst other things, the Draft Guidance breaks down the components of key definitions, such as 'serious incident' (to which the reporting obligations apply), and considers when an incident or malfunction of an AI system, whether used in accordance with its intended purpose or through reasonably foreseeable misuse, directly or indirectly causes death or serious harm.

For example, the Draft Guidance clarifies that the serious harm of infringement of fundamental rights covers the rights protected by the EU Charter of Fundamental Rights. Because the focus is on serious incidents, however, only those infringements that significantly interfere with Charter-protected rights on a large scale are reportable. Examples include discriminatory AI in recruitment, credit scoring that excludes certain categories of person (such as those with names from certain regions or living in certain locations), and biometric identification that frequently misidentifies individuals from certain backgrounds.

The Draft Guidance also clarifies that the same event may trigger reporting obligations under different EU legislation, and details how the AI Act mitigates the risk of overlapping obligations and reporting fatigue, giving examples of scenarios where reporting under the AI Act may still be required. For instance, under the Critical Entities Resilience Directive, entities in essential sectors (including energy, water and digital infrastructure) must report incidents that disrupt essential services within 24 hours of the incident. For these sectors, only incidents involving fundamental rights violations require additional reporting under the AI Act (e.g., where an AI system managing power supply discriminates against low-income areas). Similarly, under the Digital Operational Resilience Act, financial entities must report major ICT incidents and cyber threats using standardised templates; for AI systems in financial services, only incidents involving fundamental rights (e.g., discriminatory loan approvals or privacy violations) trigger additional AI Act reporting.

The Draft Guidance emphasises the EU's intention to align AI incident monitoring with international standards, including the Organisation for Economic Co-operation and Development's AI Incidents Monitor and Common Reporting Framework.

The Draft Guidance does not deal with general-purpose AI models with systemic risk and associated reporting duties (these obligations fall under Article 55 of the AI Act).

The European Commission also published a template 'Incident report for serious incidents under the AI Act (High-Risk AI Systems)' (the Draft Reporting Template) alongside the Draft Guidance.

The Draft Guidance and Draft Reporting Template are available here. Stakeholder feedback may be submitted here until 7 November 2025.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
