ARTICLE
9 May 2025

Clarifying Obligations Under The AI Act: A Preview Of The Commission's Targeted Consultation On General-Purpose AI Models


Obligations under the AI Act

In view of the rules for providers of general-purpose AI ("GPAI") models entering into force on 2 August 2025, the European Artificial Intelligence Office ("AI Office") has launched a targeted consultation to support the creation of the European Commission guidelines that will aim to clarify the scope of obligations for providers of GPAI models. We have previously seen this pattern of consultation followed by guidelines with last year's consultation on AI Act prohibitions and the definition of an AI system.

GPAI models are versatile AI models trained to perform a wide variety of tasks. Under the definition in the AI Act, it is precisely this 'generality' that distinguishes them from other models.

This definition of course raised some concerns, particularly among smaller entities, because of its ambiguity: 'generality' is not a readily checkable metric. And since GPAI models form the backbone of many downstream applications, that uncertainty carried distinct regulatory weight ahead of the above-mentioned date of application of the respective rules.

Adding to this complexity, the concept of "provider", defined under Article 3(3), was also often flagged as ambiguous. In practice, this raises questions about who precisely bears the obligations, especially when models are modified, fine-tuned, or integrated by other actors.

Against this backdrop, the draft guidelines aim to offer much-needed clarification on several fronts.

I) Definition

As noted above, the definition of GPAI models under the AI Act hinges on the concepts of significant generality and the capability to competently perform a wide range of distinct tasks. However, this remains somewhat abstract without concrete benchmarks or quantifiable metrics. To address this, the draft guidelines propose a pragmatic proxy: training compute, that is, the computational resources used during training, expressed in floating-point operations ("FLOP").

II) Who qualifies as a provider?

The draft guidelines aim to further clarify who is considered the provider of a GPAI model under Article 3(3) of the AI Act. This includes not only the original developer but also other entities that significantly modify the model, referred to as "downstream modifiers".

Under the current draft guidelines, if a downstream modifier applies modifications consuming more than one-third of the original model's training compute, they are presumed to become the provider of a new GPAI model and must comply with the obligations applicable to providers. Importantly, this presumption can extend to systemic risk obligations if the modified model meets or exceeds the systemic risk compute thresholds set in the AI Act.
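As a purely illustrative sketch (not legal advice), the presumption described above can be expressed as a simple check. The one-third modification threshold comes from the draft guidelines as summarised here; the 10^25 FLOP figure is the training-compute threshold at which the AI Act (Article 51(2)) presumes systemic risk. Whether the systemic-risk test would look at the cumulative compute of the modified model is an assumption of this sketch, as are the function and field names:

```python
# Illustrative sketch only; not a statement of how the AI Act applies.

SYSTEMIC_RISK_FLOP = 10**25  # AI Act Art. 51(2) systemic-risk presumption threshold


def classify_modifier(original_training_flop: float,
                      modification_flop: float) -> dict:
    """Presumed status of a downstream modifier under the draft guidelines.

    A modifier using more than one-third of the original model's training
    compute is presumed to become the provider of a new GPAI model.
    Cumulative compute is compared against the systemic-risk threshold
    (an assumption of this sketch).
    """
    presumed_provider = modification_flop > original_training_flop / 3
    total_flop = original_training_flop + modification_flop
    return {
        "presumed_provider": presumed_provider,
        "presumed_systemic_risk": presumed_provider
        and total_flop >= SYSTEMIC_RISK_FLOP,
    }
```

For example, a fine-tune consuming 1.5e24 FLOP on top of a model trained with 3e24 FLOP exceeds the one-third mark and would trigger the provider presumption, while the combined 4.5e24 FLOP stays below the systemic-risk threshold.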

III) The placing of models on the market

The draft guidelines explain what counts as placing a GPAI model on the market, which is the trigger for regulatory obligations. This includes making models available through APIs, software libraries, downloadable files, or cloud services, whether for free or for payment.

IV) Estimating compute

The draft guidelines offer two main methods for estimating training compute:


  1. A hardware-based approach, calculating compute based on GPU usage, training duration, and utilisation rates;
  2. An architecture-based approach, calculating compute using model parameters and training examples.

Both methods are designed to be flexible and practical, and should allow providers to choose the approach that best suits their operations.
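The two approaches can be sketched as back-of-the-envelope formulas. The hardware-based estimate multiplies GPU count, peak throughput, utilisation rate, and training duration; the architecture-based estimate uses the widely cited approximation of roughly 6 FLOP per parameter per training token for dense transformer training. The function names and the example figures below are hypothetical:

```python
# Illustrative sketches of the two training-compute estimation approaches.


def hardware_based_flop(num_gpus: int, peak_flop_per_sec: float,
                        utilisation: float, training_seconds: float) -> float:
    """Hardware-based: GPUs x peak throughput x utilisation x duration."""
    return num_gpus * peak_flop_per_sec * utilisation * training_seconds


def architecture_based_flop(num_parameters: float,
                            training_tokens: float) -> float:
    """Architecture-based: the common ~6 x N x D approximation
    (forward plus backward pass) for dense transformer training."""
    return 6 * num_parameters * training_tokens


# Hypothetical run: 1024 GPUs at 3e14 FLOP/s peak, 40% utilisation, 30 days.
hw_estimate = hardware_based_flop(1024, 3e14, 0.40, 30 * 24 * 3600)

# Hypothetical model: 7e9 parameters trained on 2e12 tokens.
arch_estimate = architecture_based_flop(7e9, 2e12)
```

Either estimate can then be compared against the relevant compute thresholds discussed above.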

Providers of GPAI models placed on the market before 2 August 2025 will have until 2 August 2027 to achieve compliance. For obligations that cannot be applied retroactively, such as copyright compliance for past training data, providers must transparently disclose and justify these limitations.

Stakeholders are therefore encouraged to contribute their insights on the Commission's dedicated portal by 22 May 2025, ensuring that the final guidelines reflect a balanced and workable consensus for Europe's evolving AI landscape.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
