
What is "High-Risk AI"? A Clear Guide to the EU AI Act

Posted on September 1, 2025 by AI Act Compass Team

[Image: An abstract visualization of AI risk assessment]

The EU AI Act is a landmark piece of legislation that introduces a risk-based approach to regulating artificial intelligence. At the heart of this framework is the concept of "High-Risk AI"—systems that could have a significant negative impact on people's safety, livelihoods, or fundamental rights. Understanding if your AI system falls into this category is the most critical first step towards compliance.

Defining High-Risk AI: The Two Main Pathways

An AI system is generally considered high-risk if it meets one of two main criteria:

  1. AI as a Safety Component: The system is intended to be used as a safety component of a product, or is itself a product, covered by the EU harmonisation legislation listed in Annex I of the Act. This covers areas such as toys, vehicles, aviation, and medical devices. If the AI's failure could create a risk to health and safety, the system will likely be deemed high-risk.
  2. Specific High-Risk Use Cases: The system falls into one of the specific use cases listed in Annex III of the Act. These are areas where AI is deemed to pose a significant threat to fundamental rights.
Curious about your system's risk level?

Our interactive questionnaire can help you quickly assess whether your AI system might be considered high-risk.

Take the High-Risk Assessment

A Closer Look at Annex III Use Cases

Annex III is crucial for most businesses. It lists several key domains where AI systems are presumed to be high-risk. These include:

  • Biometric identification and categorisation of natural persons.
  • Management and operation of critical infrastructure (e.g., water, gas, and electricity supply).
  • Education and vocational training, including systems that determine access to educational institutions or evaluate students.
  • Employment, workers management, and access to self-employment (e.g., CV-sorting software, AI for monitoring performance).
  • Access to and enjoyment of essential private and public services and benefits (e.g., credit scoring, systems that evaluate eligibility for public assistance).
  • Law enforcement, migration, asylum and border control management, and the administration of justice and democratic processes.

The "Significant Risk" Exception

Importantly, even if your system falls under Annex III, it is NOT considered high-risk if it does not pose a "significant risk of harm to the health, safety or fundamental rights of natural persons". This exemption matters for simpler AI systems, but the burden of proof rests with you, the provider: you must document why your system does not pose such a risk. Note that the exemption never applies where the AI system performs profiling of individuals.
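The classification logic described so far can be sketched as a simple decision procedure. The data structure and function below are purely illustrative; the field names are our own shorthand, not official terminology from the Act, and a real assessment would of course require legal analysis rather than boolean flags.

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    """Illustrative description of an AI system for classification purposes."""
    annex_i_safety_component: bool  # safety component of a product under Annex I legislation
    annex_iii_use_case: bool        # matches a use case listed in Annex III
    performs_profiling: bool        # performs profiling of natural persons
    exemption_documented: bool      # provider has documented that no significant risk of harm exists

def is_high_risk(system: AISystem) -> bool:
    """Sketch of the two-pathway high-risk test under the EU AI Act.

    Pathway 1: a safety component of an Annex I product is high-risk.
    Pathway 2: an Annex III use case is presumed high-risk, unless the
    provider documents that it poses no significant risk of harm; that
    exemption is unavailable where the system profiles individuals.
    """
    if system.annex_i_safety_component:
        return True
    if system.annex_iii_use_case:
        if system.performs_profiling:
            return True  # exemption never applies to profiling
        return not system.exemption_documented
    return False
```

For example, a CV-sorting tool that profiles candidates falls under Annex III and cannot rely on the exemption, so it remains high-risk regardless of documentation.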

What Are Your Obligations?

If your system is classified as high-risk, you will face a set of strict, ongoing obligations before and after it's placed on the market. Key requirements include:

  • Establishing a robust risk management system.
  • Ensuring high-quality data governance and management.
  • Maintaining detailed technical documentation.
  • Implementing automatic record-keeping (logging).
  • Providing a high degree of transparency and clear instructions for users.
  • Ensuring appropriate human oversight measures.
  • Meeting a high level of accuracy, robustness, and cybersecurity.
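For planning purposes, these obligations can be tracked as a simple checklist. The mapping below uses the commonly cited Articles 9 to 15 of the Act for these requirements; the key names and the helper function are our own illustrative shorthand, not official terminology or tooling.

```python
# Illustrative compliance checklist for a high-risk AI system.
# Article numbers reflect the commonly cited Articles 9-15 of the EU AI Act;
# the key names are our own shorthand, not official terminology.
OBLIGATIONS = {
    "risk_management_system": 9,
    "data_governance": 10,
    "technical_documentation": 11,
    "record_keeping": 12,
    "transparency_and_instructions": 13,
    "human_oversight": 14,
    "accuracy_robustness_cybersecurity": 15,
}

def outstanding(completed: set[str]) -> list[str]:
    """Return the obligations not yet marked as complete."""
    return [name for name in OBLIGATIONS if name not in completed]
```

A team could use a structure like this to drive a compliance dashboard, ticking off items as each requirement is evidenced.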

Navigating these requirements can be complex. Our operational checklist breaks down these obligations by their effective dates to help you plan your compliance journey.

Conclusion: Your Path to Compliance

Determining whether your AI system is high-risk is a foundational step in your EU AI Act compliance strategy. It dictates the level of scrutiny and the set of obligations you will need to meet. By carefully evaluating your system against Annex I and Annex III, and documenting your reasoning, you can build a solid foundation for regulatory readiness.