Formally, risk is always the result of a combination of at least two factors.

The misunderstood: risk

The concept of risk

Risk is formally defined as the combination of the probability of an occurrence of harm and the severity of that harm. While its roots lie in actuarial science, the term's practical application has produced various versions of this definition, for instance in disaster management (Siedschlag/Stangl, 2020), though they all carry the same core idea. For practitioners, this means that "risk" can have a specific, contextual meaning, and relying on everyday language may lead to confusion or even liability.

This confusion is evident in academic discourse and AI governance approaches on a global scale. Risk has become so detached from its formal meaning that it is often used interchangeably with other terms, even in professional settings.

The components of risk

To determine risk, its two main components have to be assessed: probability (P) and severity (S). Only the concrete interplay of both factors allows for an understanding of risk. Without this assessment, one is likely referring to a hazard.

In simple terms, probability (P) is the quantifiable chance that a harm will occur. For a precise assessment, however, the distinction between probability and likelihood is crucial. These terms are not synonyms:

  • Probability refers to the actual, often unknown chance of a specific outcome.
  • Likelihood is the tool used to estimate the probability. It assesses the plausibility of a hypothesis (e.g., "The chance of an error is 10%") based on available data and empirical evidence.

Practitioners analyze likelihood in order to make a well-founded statement about the probability. Conflating the two can lead to flawed assessments and, in turn, poor decisions.
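The distinction can be made concrete with a small numerical sketch. The figures, the test scenario, and the normal-approximation interval below are illustrative assumptions, not drawn from the briefing: observed evidence yields a likelihood-based estimate of the unknown probability, together with an explicit statement of its uncertainty.

```python
import math

# Hypothetical evidence: 3 harmful outputs observed in 120 test runs.
harmful = 3
runs = 120

# Maximum-likelihood estimate of the unknown probability P of harm:
p_hat = harmful / runs  # 0.025, i.e. 2.5%

# A rough 95% confidence interval (normal approximation) makes the
# uncertainty of the estimate explicit rather than hiding it.
se = math.sqrt(p_hat * (1 - p_hat) / runs)
low = max(0.0, p_hat - 1.96 * se)
high = p_hat + 1.96 * se
print(f"Estimated probability of harm: {p_hat:.1%} "
      f"(95% CI: {low:.1%} to {high:.1%})")
```

The estimate (the likelihood-based statement) is the tool; the true probability remains unknown, which is why a serious assessment always reports the uncertainty alongside the point estimate.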

Severity (S) describes the magnitude of the harm, should it occur. It is determined by several factors, for example:

  • Scale: How many people or systems are affected?
  • Intensity: How severe is the harm in a single case (e.g., minor vs. permanent health damage)?
  • Scope: What are the medium- to long-term consequences (e.g., loss of trust or cascading effects)?

A proper risk assessment therefore always considers the interplay of P and S while accounting for the uncertainty of the data.
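The interplay of P and S can be illustrated with a minimal sketch of a common practitioner convention, the qualitative risk matrix. The 1-to-5 scales, the multiplication rule, and the thresholds below are hypothetical conventions, not mandated by the EU AI Act or any standard:

```python
def risk_level(p: int, s: int) -> str:
    """Combine probability (P) and severity (S), each rated 1-5,
    into a qualitative risk level."""
    score = p * s  # one common convention; others weight S more heavily
    if score >= 15:
        return "high"
    if score >= 6:
        return "medium"
    return "low"

# Same severity, different probabilities -> different risk:
print(risk_level(1, 4))  # rare but serious harm     -> "low"
print(risk_level(5, 4))  # frequent and serious harm -> "high"
```

The point is not the arithmetic but the structure: the same severity yields different risk levels depending on probability, which is exactly the information a bare hazard statement omits.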

The EU AI Act's definition

According to the formal definition in Art. 3 (2) EU AI Act, risk means "the combination of the probability of an occurrence of harm and the severity of that harm". 

However, throughout the EU AI Act, the use of the term "risk" does not consistently reflect this definition. In most cases, the Act uses "risk" to refer to a hazard, a term the EU AI Act does not define. This is particularly true for "systemic risk", which is a label for a hazard with a potential for large-scale societal harm.

While the definition is technically correct and mirrors related legislation such as the Medical Device Regulation, its practical application to AI reveals limitations. This exposes a gap between the expectations of the EU AI Act and what this definition can deliver.

The misleading term "high-risk AI system"

The term "high-risk AI systems" is one of the most misleading labels in the AI Act. This designation does not mean that an AI system's actual risk is high. Instead, it is a simplification used by the legislator to group certain AI systems that are presumed to be a source of potential harm and therefore require special safeguards. This grouping does not replace the obligation for a risk assessment; on the contrary, it is what triggers it.

In conclusion, the label "high-risk AI system", which itself lacks a definition, is not the result of a risk assessment but a pre-assessment by the legislator. The real, system-specific risk must still be determined by the operator through a professional risk assessment.

Most importantly, the obligation to conduct an AI risk assessment is not limited to this special group of AI systems. The AI risk assessment requirement can be derived from different articles of the EU AI Act, but also follows from other legislation. The Act's design and intense focus on this newly created "high-risk" category, however, often obscures the bigger picture. This leads to the common misconception that the EU AI Act is a burdensome new regulation, when in fact, most of the underlying obligations already exist under other legislation.

The great confusion caused by the risk pyramid

The so-called risk pyramid, used by the European Commission to illustrate the regulatory motivation behind the "risk-based" AI Act, is a common source of confusion. While it appears to be a practical tool, it is, in fact, an expression of the Commission's and the legislator's own generalized, anticipatory risk assessment (cf. Art. 7 EU AI Act). It does not guide operators on how to assess the specific risks of their AI systems (or general-purpose AI models). Its purpose is purely illustrative.

Originally, the pyramid was intended to highlight the EU AI Act's pro-innovation focus (cf. Otto, 2021). The visual metaphor was meant to show that the vast majority of AI systems, forming the pyramid's wide base, would face no additional requirements under the AI Act. In contrast, only a minority of systems would face light (transparency) or strict requirements, with an even smaller group being outright prohibited. The intended message was one of targeted, limited regulation to encourage innovation.

What the EU Commission's communication altogether fails to convey is that the EU AI Act requires three different risk assessments: that of the legislator (which the risk pyramid illustrates), that of the market surveillance authority, and that of the operator. Only the operator must conduct a full, detailed risk assessment of the specific system. The risk pyramid's heuristic therefore creates a false sense of regulatory literacy.

Recommendations

Be aware that standards such as ISO 31073:2022, which define "risk" as the "effect of uncertainty on objectives", are non-binding and focus on the organisation's objectives, not on legal compliance. They must not be confused with the legal definition or the legal obligations that follow from it.

The term "risk" is often used ambiguously. If you cannot understand precisely what another party means by it, actively ask for their definition. Do not build your future on vague language; instead, seek out professional literature and experts who communicate with clarity and precision.

When writing or speaking about risk in relation to AI technologies, define the term and use it coherently. Refrain from using pyramid illustrations. Don't add to the confusion.

The formal risk definition often falls short of capturing the unique characteristics of AI technologies. To meet the protective aims of the EU AI Act, a risk assessment must therefore be based on an adapted, yet legally compliant, framework. Developing such a robust and defensible assessment is a complex task where expert collaboration can be instrumental in ensuring full compliance.

Author

Claudia Otto

As a lawyer and researcher, Claudia specializes in AI safety, security, and risk assessment under the EU AI Act, the subject of her Master's thesis in Security and Disaster Management (MBA).

References

Otto, Die große Chance von Weltklasse-KI, Ri 2021, pp. 9-10

Siedschlag/Stangl, Katastrophenmanagement, 2020, p. 20

Cite this briefing

Otto, AI Risk Literacy, Risk: the misunderstood, September 2025
