On September 26, 2025, the European Commission published draft guidance on serious incident reporting requirements for high-risk AI systems under the EU AI Act. For organizations developing or deploying AI systems that may fall within the Act's high-risk classification, understanding these new reporting obligations is essential for compliance planning.
Key Takeaways
- The Commission published a draft incident reporting template and guidance document on September 26, 2025.
- Providers of high-risk AI systems will be required to report "serious incidents" to national authorities.
- Reporting timelines vary from two (2) to fifteen (15) days depending on the severity and type of incident.
- Public consultation is open until November 7, 2025.
Understanding the Incident Reporting Framework
Article 73 of the EU AI Act establishes a tiered reporting system for serious incidents involving high-risk AI systems. While these requirements will not take effect until August 2026, the newly released draft guidance offers valuable insights into the Commission's expectations.
The reporting framework serves multiple purposes: creating an early warning system for harmful patterns, establishing clear accountability for providers and users, enabling timely corrective measures, and fostering transparency to build public trust in AI technologies.
What Qualifies as a "Serious Incident"?
Under Article 3(49) of the Act, a "serious incident" is an incident or malfunctioning of an AI system that directly or indirectly leads to any of the following:
- Death of a person, or serious harm to a person's health;
- Serious and irreversible disruption of the management or operation of critical infrastructure;
- Infringement of obligations under EU law intended to protect fundamental rights; or
- Serious harm to property or the environment.
Notably, the draft guidance emphasizes both direct and indirect causation. An AI system that produces an incorrect medical analysis, leading to patient harm through a physician's subsequent decisions, would qualify as an indirect serious incident. Organizations must therefore account for downstream effects in their risk management frameworks.
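For compliance and engineering teams that want to operationalize these rules, the tiered deadlines and incident categories described above can be encoded directly. The sketch below is illustrative only, not legal advice: the two-, ten-, and fifteen-day tiers reflect our reading of Article 73, and every identifier in it (IncidentType, reporting_deadline) is hypothetical. Confirm the exact deadlines against the adopted guidance.

```python
from datetime import date, timedelta
from enum import Enum

class IncidentType(Enum):
    """Serious-incident categories summarized from Article 3(49)."""
    DEATH_OR_HEALTH_HARM = "death_or_health_harm"
    CRITICAL_INFRASTRUCTURE_DISRUPTION = "critical_infrastructure_disruption"
    FUNDAMENTAL_RIGHTS_INFRINGEMENT = "fundamental_rights_infringement"
    PROPERTY_OR_ENVIRONMENT_HARM = "property_or_environment_harm"

# Illustrative deadlines in days, keyed to Article 73's tiers as we read
# them; confirm against the final guidance before relying on these values.
REPORTING_DEADLINE_DAYS = {
    IncidentType.CRITICAL_INFRASTRUCTURE_DISRUPTION: 2,
    IncidentType.DEATH_OR_HEALTH_HARM: 10,
    IncidentType.FUNDAMENTAL_RIGHTS_INFRINGEMENT: 15,
    IncidentType.PROPERTY_OR_ENVIRONMENT_HARM: 15,
}

def reporting_deadline(incident_type: IncidentType, awareness_date: date) -> date:
    """Latest date to file the report, counted from the day the provider
    became aware of the (suspected) serious incident."""
    return awareness_date + timedelta(days=REPORTING_DEADLINE_DAYS[incident_type])

# Example: a critical-infrastructure incident the provider learns of on
# September 1, 2026 would need to be reported by September 3, 2026.
print(reporting_deadline(IncidentType.CRITICAL_INFRASTRUCTURE_DISRUPTION,
                         date(2026, 9, 1)))  # 2026-09-03
```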
Intersection with Existing Reporting Regimes
For clients managing multiple compliance frameworks, the guidance provides welcome clarification on overlapping reporting obligations. High-risk AI systems already subject to equivalent reporting obligations under other EU laws (such as NIS2, DORA or CER) generally need only report fundamental rights violations under the AI Act.
This reflects the Commission's attempt to minimize duplicative reporting burdens, though the practical implementation still requires careful cross-functional coordination between AI governance, legal, and compliance teams.
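The carve-out itself reduces to a short piece of triage logic. The sketch below reuses the hypothetical IncidentType from the earlier example and should be read as an illustration of the draft guidance's general rule, not a complete legal analysis; the final guidance may draw this line differently.

```python
def reportable_under_ai_act(incident_type: IncidentType,
                            covered_by_equivalent_regime: bool) -> bool:
    """Rough triage per the draft guidance: systems already reporting under
    an equivalent EU regime (e.g., NIS2, DORA, CER) generally report only
    fundamental-rights incidents under the AI Act."""
    if covered_by_equivalent_regime:
        return incident_type is IncidentType.FUNDAMENTAL_RIGHTS_INFRINGEMENT
    return True  # otherwise, all Article 3(49) categories are reportable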
Practical Implications for Organizations
Organizations should begin mapping their AI systems against the high-risk criteria and preparing internal processes for incident detection, investigation, and reporting. Key considerations include the following (an illustrative record-keeping sketch follows the list):
- Establishing clear incident response protocols;
- Implementing monitoring systems to detect potential serious incidents;
- Developing investigation procedures that preserve evidence;
- Creating cross-functional teams to manage reporting obligations; and
- Updating risk assessments to account for serious incident scenarios.
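As one way to anchor these processes, an internal incident log might capture, at a minimum, the fields sketched below. This builds on the hypothetical IncidentType and reporting_deadline helpers from the earlier examples; the field names are our own invention and should be aligned with the Commission's final reporting template once it is adopted.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class IncidentRecord:
    """Hypothetical internal log entry for a suspected serious incident.
    Field names are illustrative, not drawn from the draft template."""
    system_id: str                 # internal identifier of the AI system
    incident_type: IncidentType    # category from the earlier sketch
    awareness_date: date           # when the provider became aware
    description: str
    indirect_causation: bool       # downstream effects count (see above)
    evidence_refs: list[str] = field(default_factory=list)
    reported_on: Optional[date] = None

    def is_overdue(self, today: date) -> bool:
        """True if no report has been filed by the applicable deadline."""
        return (self.reported_on is None and
                today > reporting_deadline(self.incident_type, self.awareness_date))
```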
Next Steps
We encourage clients to participate in the public consultation, which remains open until November 7, 2025. The Commission is particularly seeking feedback and examples regarding the interplay with other reporting regimes.
Organizations should also begin reviewing their AI governance frameworks to ensure they can effectively implement these reporting requirements when they become applicable in August 2026.
For assistance with EU AI Act compliance planning, incident response frameworks or submitting feedback to the consultation, please contact any member of Jones Walker's Privacy, Data Strategy and AI Practice.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.