ARTICLE
13 June 2025

European Commission Launches Stakeholder Consultation On The EU AI Act's Rules For High-Risk AI Systems

Steptoe LLP

On June 6, 2025, the European Commission launched a public consultation to gather input on the implementation of the EU Artificial Intelligence (AI) Act provisions and obligations related to high-risk AI systems. The consultation is intended to inform guidelines that the European Commission is currently drafting. More specifically, the Commission seeks stakeholder input, including practical examples of AI systems, on the issues to be clarified in these guidelines: the classification of high-risk AI systems, the related requirements and obligations, and responsibilities along the AI value chain.

All interested stakeholders have until July 18, 2025, to share their input by responding to the questionnaire available on the European Commission's consultation webpage.

The questionnaire for the consultation is divided into five sections:

  • Sections 1 and 2 focus on the classification rules for high-risk AI systems. The European Commission is particularly seeking input on how the notion of "safety component" should be interpreted, examples of AI systems that can themselves be considered products covered by the EU legislation listed in Annex I, and the exemptions from high-risk classification under Annex III.
  • Section 3 covers the notion of the "intended purpose" of an AI system and how it should be clarified.
  • Section 4 focuses on the obligations applicable to Providers and Deployers of high-risk AI systems, as well as the concept of "substantial modification" and the resulting risk that other actors in the AI value chain may be requalified as Providers of high-risk AI systems.
  • Section 5 seeks input on potential updates to the list of high-risk use cases set out in Annex III of the EU AI Act and to the list of prohibited AI practices.

Participation in this public consultation is a unique opportunity to shape the EU AI regulatory framework. Businesses are therefore highly encouraged to take part, particularly those active in the Life Sciences, Financial Services, Transportation, Critical Infrastructure, and Technology sectors. Steptoe is well-prepared to assist in crafting an effective response.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
