GPAI Models and the AI Act: What Providers Need to Know
Posted on September 1, 2025 by AI Act Compass Team
General-Purpose AI (GPAI) models, often called foundation models, are a cornerstone of the modern AI ecosystem. Their ability to perform a wide range of tasks makes them incredibly powerful, but also introduces unique regulatory challenges. The EU AI Act places specific, tiered obligations on the providers of these models to ensure transparency and accountability throughout the AI value chain.
What Defines a GPAI Model under the Act?
A GPAI model is defined as an AI model that "displays significant generality and is capable of competently performing a wide range of distinct tasks" and that "can be integrated into a variety of downstream systems or applications" (Article 3(63)). Think of large language models (LLMs) or large-scale image-generation models.
The key takeaway is that their flexibility requires a different approach from narrow AI systems. The obligations fall on the original provider of the model, not just the company that integrates it into a customer-facing product.
The compliance journey for GPAI models has its own timeline. Our operational checklist outlines when key obligations come into force.
Core Obligations for All GPAI Providers
If you place a GPAI model on the EU market, you must adhere to a set of core transparency obligations, which apply from 2 August 2025 (with the Commission's enforcement powers following a year later). These include:
- Technical Documentation: You must create and maintain extensive technical documentation. This should detail the model's training process, the data used, its testing and evaluation methodologies, and its known limitations. This documentation must be available to the AI Office and national authorities upon request.
- Information for Downstream Providers: You must provide clear information and documentation to the companies that integrate your model into their own systems (the 'downstream providers'). This helps them understand the model's capabilities and limitations, so that where the model ends up inside a high-risk AI system, the downstream provider can meet its own compliance obligations.
- Copyright Compliance Policy: You must put in place a policy to comply with EU copyright law, which is particularly relevant to the data used to train your model. You must also draw up and make publicly available a "sufficiently detailed summary" of the content used for training.
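To make the documentation obligation more concrete, here is a hypothetical sketch of how a provider might organize the required information internally. The field names are illustrative assumptions, not the Act's official Annex XI schema, and the values are placeholders.

```python
# Hypothetical internal skeleton for GPAI technical documentation.
# Field names are illustrative, not prescribed by the AI Act.
gpai_model_documentation = {
    "model": {
        "name": "example-model",  # placeholder name
        "architecture": "decoder-only transformer",
        "parameters": 70_000_000_000,
    },
    "training": {
        "process": "pretraining + instruction tuning",
        "compute_flops": 6.3e24,
        "data_sources": ["licensed corpora", "public web crawl"],
    },
    "evaluation": {
        "methodologies": ["benchmark suite", "adversarial testing"],
        "known_limitations": ["may produce factually incorrect output"],
    },
    "copyright": {
        "policy_in_place": True,
        "training_content_summary_public": True,
    },
}

# The same structure can serve both audiences: the full record for the
# AI Office on request, and a filtered subset for downstream providers.
downstream_info = {
    "model": gpai_model_documentation["model"],
    "known_limitations": gpai_model_documentation["evaluation"]["known_limitations"],
}
```

Keeping one structured record and deriving the downstream-facing subset from it helps avoid the two documents drifting apart.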
Heightened Rules for GPAI Models with "Systemic Risk"
The AI Act introduces a higher tier of regulation for GPAI models that are deemed to pose "systemic risk." A model is presumed to pose systemic risk if the cumulative compute used for its training, measured in floating-point operations (FLOPs), exceeds 10^25.
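For a rough sense of where that threshold falls, the widely used ~6·N·D approximation for dense-transformer training compute (N parameters, D training tokens) can be compared against it. The approximation is a community heuristic, not part of the Act, so treat this as a back-of-the-envelope sketch only:

```python
# Rough check of whether a training run crosses the AI Act's 10^25 FLOP
# presumption threshold, using the common ~6 * N * D heuristic for the
# training compute of a dense transformer (N parameters, D tokens).

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # Article 51 presumption threshold

def estimated_training_flops(n_parameters: float, n_tokens: float) -> float:
    """Approximate total training compute via the 6*N*D heuristic."""
    return 6.0 * n_parameters * n_tokens

def presumed_systemic_risk(n_parameters: float, n_tokens: float) -> bool:
    """True if the estimated compute exceeds the 10^25 FLOP threshold."""
    return estimated_training_flops(n_parameters, n_tokens) > SYSTEMIC_RISK_THRESHOLD_FLOPS

# Example: a 70B-parameter model trained on 15T tokens
# yields roughly 6.3e24 FLOPs, just below the 1e25 threshold.
flops = estimated_training_flops(70e9, 15e12)
print(f"{flops:.2e} FLOPs -> presumed systemic risk: {presumed_systemic_risk(70e9, 15e12)}")
```

Note that the presumption is rebuttable and the Commission can also designate models below the threshold, so a compute estimate is a starting point, not a final classification.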
Providers of these high-impact models face additional, more stringent obligations:
- Performing standardized model evaluations, including adversarial testing.
- Assessing and mitigating possible systemic risks.
- Tracking, documenting, and reporting serious incidents.
- Ensuring a high level of cybersecurity protection.
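As one example of what the incident-tracking obligation could look like in practice, here is a minimal, hypothetical internal record. The Act requires tracking, documenting, and reporting serious incidents but does not prescribe a data schema, so every field name here is an assumption:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical internal record for a serious incident; the AI Act does
# not prescribe this schema, so all field names are illustrative.
@dataclass
class SeriousIncident:
    description: str
    detected_at: datetime
    reported_to_ai_office: bool = False
    corrective_measures: list[str] = field(default_factory=list)

# Example lifecycle: detect, document corrective action, then report.
incident = SeriousIncident(
    description="safety filter bypassed via prompt injection",
    detected_at=datetime.now(timezone.utc),
)
incident.corrective_measures.append("patched safety filter")
incident.reported_to_ai_office = True
```

Keeping a structured record from the moment of detection makes it easier to show authorities a complete timeline later.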
Conclusion: Transparency is Non-Negotiable
For providers of GPAI models, the message from the EU AI Act is clear: transparency is not optional. Whether you provide a free and open-source model (which benefits from narrower documentation exemptions unless it poses systemic risk) or a massive proprietary foundation model, you have a duty to document how it works and communicate its capabilities clearly. By preparing now, you can ensure your models empower innovation across the EU while upholding the principles of safety and accountability.