ARTICLE
17 November 2025

EDPS Issues Comprehensive AI Risk Management Guidance For EU Institutions

Jones Walker

Contributor

The European Data Protection Supervisor (EDPS) has issued detailed new guidance on managing data protection risks in artificial intelligence (AI) systems used by EU institutions, bodies, offices, and agencies. Released on November 11, 2025, the 55-page framework provides structured, ISO-based methods for identifying and mitigating AI-related risks to EU fundamental rights.

Key Framework Elements

The guidance adopts the ISO 31000:2018 risk management methodology, translating data protection principles into technical compliance measures across five core areas:

  • Fairness: Addressing algorithmic and training-data bias, including overfitting and interpretive bias.
  • Accuracy: Ensuring statistical validity, preventing data drift, and reducing inaccurate personal data outputs.
  • Data Minimization: Avoiding indiscriminate collection and storage of personal data.
  • Security: Mitigating risks such as AI output disclosure, data leakage, and API exposure.
  • Data Subject Rights: Enabling effective mechanisms for access, rectification, and erasure.
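To make the ISO 31000 approach concrete, the five core areas can be organized into a simple risk register that follows the standard's identify–analyze–evaluate–treat steps. The sketch below is purely illustrative: the area names mirror the guidance, but the register structure, scoring scheme, and treatment threshold are hypothetical and do not come from the EDPS document.

```python
from dataclasses import dataclass

# The five core areas named in the EDPS guidance.
CORE_AREAS = {"fairness", "accuracy", "data_minimization",
              "security", "data_subject_rights"}

@dataclass
class RiskEntry:
    area: str            # one of the five core areas
    description: str     # identified risk (ISO 31000: identification)
    likelihood: int      # 1 (rare) .. 5 (almost certain) -- analysis
    impact: int          # 1 (negligible) .. 5 (severe)   -- analysis
    mitigation: str = "" # planned countermeasure          -- treatment

    def __post_init__(self):
        if self.area not in CORE_AREAS:
            raise ValueError(f"unknown area: {self.area}")

    @property
    def score(self) -> int:
        # Simple likelihood-times-impact scoring (evaluation step).
        return self.likelihood * self.impact

def needs_treatment(entry: RiskEntry, threshold: int = 9) -> bool:
    # Risks at or above a (hypothetical) threshold require a mitigation.
    return entry.score >= threshold

register = [
    RiskEntry("fairness", "training data under-represents a group", 4, 4,
              "re-sample and audit training data for bias"),
    RiskEntry("security", "model API leaks personal data in outputs", 2, 5,
              "filter outputs; rate-limit and log API access"),
    RiskEntry("accuracy", "data drift degrades output quality", 2, 3),
]

high_priority = [e for e in register if needs_treatment(e)]
```

An institution would, of course, replace this toy scoring with its own assessment methodology; the point is only that the guidance's area-by-area structure maps naturally onto a standard risk-register workflow.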

Interpretability and Explainability: Compliance Prerequisites

A central feature of the EDPS guidance is its emphasis on interpretability and explainability as sine qua non conditions for compliance with the EU General Data Protection Regulation (GDPR).

  • Interpretability concerns understanding how an AI model operates (its internal logic and linkages between input and output); and
  • Explainability focuses on why an AI system produces particular results, enabling those outcomes to be communicated meaningfully to end users.

Together, they form the foundation for transparent, auditable, and trustworthy AI decision-making.

Lifecycle and Practical Structure

The guidance maps risk across the entire AI lifecycle, from inception through deployment and retirement, and distinguishes between development and procurement scenarios. Section 4 introduces interpretability and explainability as baselines for risk management, while Section 5 pairs each identified risk with specific technical countermeasures.

Three detailed annexes expand on implementation:

  1. Metrics for AI evaluation;
  2. Visual risk overview; and
  3. Phase-specific checklists for both development and procurement contexts.

Scope and Limitations

The EDPS issues this document in its supervisory capacity, not as a market-surveillance authority under the EU AI Act. It explicitly states that the proposed controls are "by no means exhaustive" and do not replace institution-specific risk assessments. Each controller remains responsible for conducting comprehensive compliance evaluations tailored to its own processing activities.

Why This Matters

This guidance fills a crucial operational gap between high-level regulatory principles and practical AI deployment. For US organizations partnering with or supplying AI systems to EU institutions, or developing governance programs inspired by European models, it provides:

  • A tested, ISO-aligned methodology for systematic risk identification;
  • Concrete technical measures that extend beyond legal abstractions;
  • Lifecycle-based compliance checkpoints; and
  • Practical integration of interpretability and explainability requirements.

Notably, the framework complements US initiatives such as the NIST AI Risk Management Framework and OMB M-24-10, offering a parallel structure for organizations seeking cross-jurisdictional AI accountability.

Access and Next Steps

The full document, Guidance for Risk Management of Artificial Intelligence Systems, is available on the EDPS website.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
