General-Purpose AI (GPAI) Models & The EU AI Act: How Modifications Trigger Legal Impact

Author: Xavier Valero

Dec 09, 2025
Adapting general-purpose models like GPT, Gemini, or Llama through fine-tuning or domain-specific training has become a common route to innovation. Yet, with the EU AI Act entering into force, these modifications may bring new legal responsibilities. Wondering which changes to a General-Purpose AI (GPAI) model trigger obligations under the EU AI Act? In this article, we explore the key obligations and share our perspective on the latest EU guidelines so you can prepare strategically.

GPAI Models and the EU GPAI Code of Practice

On July 10th, 2025, the EU introduced the General-Purpose AI Code of Practice, designed to help providers of GPAI models comply with the GPAI-specific provisions of the EU AI Act, which officially took effect on August 2nd, 2025.

But what exactly are GPAI models?

GPAI models are built to perform a broad range of tasks across different applications. Unlike narrow AI systems, they are not limited to a single function: they can write legal summaries, draft emails, generate code, or even reason through complex questions. A model is presumed to be a GPAI model if its training required at least 10²³ Floating Point Operations (FLOPs).
The EU also identifies a special category known as systemic-risk GPAI models. These are significantly larger models with potential risks to health, safety, fundamental rights, or society at large. They are classified as systemic-risk if their training consumed ≥ 10²⁵ FLOPs or if they meet other qualitative criteria, such as widespread use in high-impact sectors or the capacity to influence democratic processes.
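To make these thresholds tangible, here is a minimal sketch in Python. It assumes the widely used rule of thumb that training compute ≈ 6 × parameters × training tokens, which is a community heuristic, not a methodology prescribed by the Act; the function names and the example model are our own illustration.

```python
GPAI_THRESHOLD = 1e23           # FLOPs: presumption of GPAI classification
SYSTEMIC_RISK_THRESHOLD = 1e25  # FLOPs: presumption of systemic risk

def estimate_training_flops(num_params: float, num_tokens: float) -> float:
    """Rule-of-thumb training compute: ~6 FLOPs per parameter per token."""
    return 6 * num_params * num_tokens

def classify(flops: float) -> str:
    """Map estimated compute onto the Act's compute presumptions."""
    if flops >= SYSTEMIC_RISK_THRESHOLD:
        return "GPAI model with systemic risk (Articles 53 and 55 apply)"
    if flops >= GPAI_THRESHOLD:
        return "GPAI model (Article 53 applies)"
    return "below the GPAI compute presumption"

# Hypothetical example: a 70B-parameter model trained on 2T tokens.
flops = estimate_training_flops(70e9, 2e12)
print(f"{flops:.1e} FLOPs -> {classify(flops)}")
# 8.4e+23 FLOPs -> GPAI model (Article 53 applies)
```

Keep in mind that compute is only a presumption: the qualitative criteria mentioned above can also lead to a systemic-risk designation regardless of the FLOP count.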

Legal Implications Beyond GPAI Models Under the EU AI Act

While only a few AI models meet the systemic-risk criteria, a larger group qualifies as GPAI models. However, many AI solutions available today are not built from scratch; instead, they are based on GPAI foundation models and customized for specific use cases.
To clarify responsibilities in these scenarios, the European Commission released guidelines on July 18th, 2025, shortly after the introduction of the Code of Practice, offering practical direction for companies working with GPAI models.
A key situation here is the common industry practice of developing new AI systems by modifying pre-trained GPAI models. This typically involves techniques such as fine-tuning, distillation, or Retrieval-Augmented Generation (RAG). To do so, organizations often inject domain-specific data, adjust parameters, or extend model capabilities to create tailored models according to their needs.
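To ground one of these techniques, here is a toy, self-contained illustration of the RAG pattern: retrieve the most relevant domain document, then prepend it to the prompt sent to the underlying model. The keyword-overlap retriever and all names are our own simplification; production systems use vector embeddings and a real model API.

```python
def score(query: str, document: str) -> int:
    """Count shared words between query and document (toy retriever)."""
    return len(set(query.lower().split()) & set(document.lower().split()))

def retrieve(query: str, corpus: list[str]) -> str:
    """Return the corpus document with the highest overlap score."""
    return max(corpus, key=lambda doc: score(query, doc))

def build_prompt(query: str, corpus: list[str]) -> str:
    """Augment the user query with retrieved context before calling the model."""
    context = retrieve(query, corpus)
    return f"Context:\n{context}\n\nQuestion: {query}"

corpus = [
    "Article 53 requires GPAI providers to prepare technical documentation.",
    "Systemic-risk models must undergo adversarial testing.",
]
print(build_prompt("What documentation does Article 53 require?", corpus))
# The assembled prompt would then be passed to the base model's API.
```

Note that this pattern only changes the prompt, not the model's weights, which is what distinguishes it from fine-tuning or distillation.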
But what are the legal implications of these modifications? The EU GPAI Code of Practice establishes two distinct roles:
  • The AI Developer: creates and releases the foundational GPAI model.
  • The Downstream Modifier: adapts or fine-tunes the base model into a new system.
In certain cases, downstream modifiers may be legally recognized as providers of a GPAI model under the EU AI Act, carrying the same obligations as the original developers. This applies when their modifications are considered "substantial."

What Counts as a Substantial Modification?

The guidelines explain that not all changes count as substantial. This aligns with broader EU product law: a modification is substantial when the resulting system is, for regulatory purposes, a new model.
Examples of changes that may lead to reclassification include:
  • Adding extensive new training data (e.g., domain-specific datasets).
  • Changing the architecture or training objectives.
  • Applying instruction tuning that enables capabilities not originally possible.
To ensure transparency, the guidelines introduce an indicative technical threshold: a modification is presumed substantial when the compute used for it exceeds roughly one third of the compute that defines the relevant classification (10²³ FLOPs for GPAI status, 10²⁵ FLOPs for systemic risk).
These numbers serve as references rather than strict rules. Both qualitative changes (in capabilities and scope) and quantitative thresholds (in compute used) determine whether a downstream modifier becomes a GPAI provider under the law.
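As a minimal sketch of how such a quantitative check might look, assuming the one-third reference values described above (the qualitative test still applies independently):

```python
# Reference values are presumptions, not strict rules: a modification is
# presumed substantial when the compute spent on it exceeds roughly one
# third of the relevant classification threshold.
GPAI_THRESHOLD = 1e23      # FLOPs presumption for GPAI status
SYSTEMIC_THRESHOLD = 1e25  # FLOPs presumption for systemic risk

def is_presumed_substantial(modification_flops: float) -> bool:
    """Quantitative presumption only: qualitative capability changes
    can make a smaller modification substantial too."""
    return modification_flops > GPAI_THRESHOLD / 3

def is_presumed_systemic(modification_flops: float) -> bool:
    """Check against the systemic-risk reference value."""
    return modification_flops > SYSTEMIC_THRESHOLD / 3

# Hypothetical fine-tune consuming 5e22 FLOPs of compute:
print(is_presumed_substantial(5e22))  # True  (> ~3.3e22)
print(is_presumed_systemic(5e22))     # False (< ~3.3e24)
```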
On the other hand, lighter interventions such as prompt engineering, few-shot learning, or narrow fine-tuning are generally not considered substantial, meaning the deployer is not treated as a GPAI provider.

What if You Become a Downstream Modifier with Provider Obligations?

If your modifications are considered substantial, the EU AI Act imposes a series of obligations:
Article 53 – Obligations for GPAI Model Providers:
  • Prepare technical documentation describing design, capabilities, and limitations.
  • Publish a model card summarizing intended use, performance, and constraints (see the sketch after this list).
  • Provide a summary of training data, including governance practices.
  • Ensure copyright compliance with EU law.
  • Offer clear deployment instructions to downstream users.
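To make these documentation duties concrete, here is a minimal sketch of a machine-readable model card as a plain Python dictionary. The schema and every field name are our own illustration: the Act prescribes the content of documentation, not a particular format.

```python
import json

# Illustrative model-card skeleton for a modified GPAI model.
model_card = {
    "model_name": "example-domain-tuned-model",   # hypothetical name
    "base_model": "open-foundation-model-v1",     # hypothetical base model
    "intended_use": "Summarizing contracts for internal legal review",
    "out_of_scope_use": ["Medical or other safety-critical decisions"],
    "modification": {
        "technique": "instruction fine-tuning",
        "training_data_summary": "120k proprietary legal documents",
        "estimated_compute_flops": 4.0e22,
    },
    "performance": {"summarization_rouge_l": 0.41},  # placeholder metric
    "limitations": ["May hallucinate citations", "English-language only"],
    "copyright_compliance": "Training data licensed or opted-in per EU law",
}

print(json.dumps(model_card, indent=2))
```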
Article 55 – Additional Obligations for Systemic-Risk GPAI Providers:
  • Perform state-of-the-art model evaluations, including adversarial testing.
  • Assess and mitigate possible systemic risks at Union level.
  • Track, document, and report serious incidents and corrective measures to the AI Office.
  • Ensure an adequate level of cybersecurity protection for the model and its infrastructure.
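As one illustration of the incident-tracking duty, here is a minimal sketch of an internal record a provider might keep before reporting to the AI Office. The structure and field names are our own, not a schema mandated by the Act.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative internal record for a serious incident under Article 55.
@dataclass
class SeriousIncidentRecord:
    description: str
    detected_at: datetime
    affected_systems: list[str]
    corrective_measures: list[str] = field(default_factory=list)
    reported_to_ai_office: bool = False

incident = SeriousIncidentRecord(
    description="Model output facilitated a cybersecurity attack",
    detected_at=datetime.now(timezone.utc),
    affected_systems=["customer-facing chatbot"],
)
incident.corrective_measures.append("Deployed updated safety filter")
print(incident)
```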
The EU GPAI Guidelines are not just technical detail; they play an essential role in defining legal accountability in the era of foundation models. For organizations adapting or fine-tuning GPAI systems, understanding when your modifications elevate your role to that of a provider is key.
If your organization is fine-tuning or adapting open models, now is the time to prepare for these regulatory obligations. Developing a compliance strategy early not only mitigates risk but also strengthens trust in your AI solutions. At DEKRA, we safeguard AI technology throughout its entire lifecycle, from development and testing to deployment and certification. With our holistic approach, we ensure that AI systems remain secure and future-ready.
Let’s work together to strengthen trust and accelerate AI innovation!