There is no systemic risk. 
Every risk is systemic.

The term "systemic risk" does not mean what you think it does.

Why risk is inherently systemic

Formally, risk always results from the combination of at least two factors: the probability (P) of an occurrence of harm and its severity (S). The popular term "systemic risk" sits uneasily with this established principle. The very interaction between these two, rarely monocausal, risk factors already implies a systemic dynamic: the underlying factors determining probability and severity, e.g. the nature of the hazard, the vulnerability of the affected entity, and its exposure, are themselves complex, interdependent systems or traits of a system. On this reading, "systemic risk" is a tautology.
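The interplay of probability and severity can be made concrete in a few lines of code. This is only an illustrative sketch: the 1-5 scales, the factor names, and the way the factors combine are assumptions chosen for demonstration, not a standardised risk methodology.

```python
from dataclasses import dataclass


@dataclass
class RiskFactors:
    """Illustrative inputs behind probability and severity.

    The 1-5 scales and the multiplicative combination below are
    assumptions for demonstration, not a standardised methodology.
    """
    hazard: int         # nature/intensity of the hazard (1-5)
    vulnerability: int  # susceptibility of the affected entity (1-5)
    exposure: int       # degree of contact with the hazard (1-5)


def risk_score(f: RiskFactors) -> int:
    # In this sketch, probability is driven by exposure to the hazard,
    # severity by the entity's vulnerability to it.
    probability = f.hazard * f.exposure        # P
    severity = f.hazard * f.vulnerability      # S
    return probability * severity              # R = P x S


# Changing any one interconnected factor changes the overall risk:
baseline = risk_score(RiskFactors(hazard=3, vulnerability=4, exposure=2))
reduced = risk_score(RiskFactors(hazard=3, vulnerability=4, exposure=1))
```

Note how no single factor determines the result on its own: halving exposure lowers the score even though hazard and vulnerability are unchanged, which is exactly the interdependence the paragraph above describes.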

To demonstrate the tautology in "systemic risk", the term "system" must first be clarified. The adjective "systemic" is frequently used in policy and regulation without a corresponding definition of the noun "system" on which it is based. In the absence of a formal (legal) definition, a foundational concept must be applied that aligns with the term's intended analytical function.

Systems analysis can provide the necessary clarity. Meadows defined a system as "an interconnected set of elements that is coherently organised in a way that achieves something" (Meadows, 2008). She pointed out that "a system must consist of three kinds of things: elements, interconnections, and a function or purpose" (Meadows, 2008). According to this definition, a cell is a system, the human body is a system, a football team is a system. And each system can be part of another system: a house is part of a town, which is part of a county, which is part of a country, and so on.
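Meadows' three components translate naturally into a minimal data structure. The class and field names below are a hypothetical illustration of her definition, not drawn from her text.

```python
from dataclasses import dataclass


@dataclass
class System:
    # Meadows' three components, as a minimal sketch:
    elements: set[str]
    interconnections: set[tuple[str, str]]  # directed links between elements
    purpose: str

    def is_coherent(self) -> bool:
        # Every interconnection must link known elements; otherwise we
        # have a mere collection of parts, not a system.
        return all(a in self.elements and b in self.elements
                   for a, b in self.interconnections)


# A football team as a toy example (labels are illustrative):
team = System(
    elements={"goalkeeper", "defence", "attack"},
    interconnections={("goalkeeper", "defence"), ("defence", "attack")},
    purpose="win matches",
)
```

Remove the interconnections and the `purpose` becomes unachievable; the same set of elements would then be only a collection of parts.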

Emphasis must be placed on "interconnections", as they are what distinguish a system from a mere collection of parts. A risk is precisely such an interplay of factors, e.g. the hazard, the entity's vulnerability, and its exposure, which are themselves complex systems or traits of a system. It is the interconnections between these systemic factors that constitute risk. Risk is thus not merely related to a system; it is a systemic dynamic by definition. The term "systemic risk" is therefore a tautology.

EU AI Act: "Systemic risk" is a name for a specific hazard

The EU AI Act defines "systemic risk" as

"a risk that is specific to the high-impact capabilities of general-purpose AI models, having a significant impact on the Union market due to their reach, or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or the society as a whole, that can be propagated at scale across the value chain" - Art. 3 (65) EU AI Act

This definition reveals that lawmakers focused on the "high-impact capabilities" of a hazard, the general-purpose AI model itself, and on its potential for large-scale societal harm. This focus on the hazard is confirmed by Art. 51 EU AI Act, which details a procedure of hazard identification, not of risk assessment: a model is qualified as a hazard with "high-impact capabilities", labelled "systemic risk", simply by verifying whether its capabilities meet certain hazardous criteria.

Consistent with this, Recital 110 EU AI Act provides a detailed catalogue not of risks, but of hazards and threats with potential for large-scale societal harm. While the recital discusses these hazards and threats in the context of general-purpose AI models, they are not exclusive to them.

Reality-checking the "systemic risks" in Recital 110 EU AI Act

Not all hazards and threats (collectively: dangers) listed under the umbrella of "systemic risks" in Recital 110 EU AI Act are of equal credibility. For a sound assessment, they must be divided into two distinct groups: operative realities, which AI accelerates, and hypothetical dangers, which AI currently cannot create on its own. This distinction is based on AI's reliance on existing human knowledge and its inability to overcome fundamental physical and scientific barriers.

Dangers based on operative realities

This group of dangers is already relevant today. AI technologies (hereinafter: AI) do not create new dangers but can serve as a powerful tool to accelerate and scale well-known ones.

Offensive cyber capabilities

AI automates the exploitation of existing, human-made software vulnerabilities. The advantage is the speed and scale of the attack, not the creation of a new type of flaw or attack.

Disruption of critical sectors

The underlying vulnerability is the digitalisation of infrastructure. The danger arises when AI is given control over these systems, leading to a physical disruption of an existing system.

Disinformation & similar dangers

Lies and propaganda are not new. However, AI enables the mass production of convincing fakes, which scales the erosion of trust, e.g., in established democratic and social systems.

Hypothetical dangers

This group of dangers is highly speculative, as it would require AI to overcome fundamental scientific and physical barriers. Making these abstract potentials a concrete reality would require substantial human effort and scientific breakthroughs that do not yet exist.

CBRN (e.g., Bioweapons)

AI draws from human knowledge and cannot solve the fundamental "riddle of nature". Every AI output is a hypothesis that needs to be tested, and a mere hypothesis fails against the barriers of the real world (e.g., the incalculability of living organisms as a biological barrier).

Self-replication & propagation

These require an AI to first develop autonomous, self-directed intent. It would furthermore need to overcome fundamental scientific barriers to propagate across virtual and physical systems. This is different from exploiting an operative reality, i.e. an existing vulnerability.

Loss of control or, e.g., misalignment

The scientific hurdles for creating a so-called AGI or superintelligence are currently unknown, let alone solved. Scenarios based on the assumption of an AI with autonomous, human-like or human-exceeding goals are therefore purely hypothetical.

Recommendations

Don't get confused by names or hypotheses.

As an operator of AI systems, you also have to consider relevant hazards and threats listed as "systemic risks" in Recital 110 EU AI Act. This obligation might not follow from the EU AI Act directly, but from other legislation.

As a provider of general-purpose AI models, don't forget the risk assessment. Under the EU AI Act, the qualification of a general-purpose AI model "with systemic risk" is a process of hazard identification, but this initial step does not absolve the provider from the subsequent obligation to conduct a formal risk assessment. The proof of a mandatory full procedure lies in the specific obligations of Art. 55 (1) (a) and (b) EU AI Act ("identify, assess, mitigate"), which apply only after a model has been qualified as having a "hazardous character".

For any operator, it is crucial to distinguish between the two types of dangers discussed. Your risk assessment and management resources, as well as your compliance efforts, should be focused on the operative realities. The hypothetical dangers, such as those discussed in the context of Existential Risk, should be treated as a matter of opinion, precautionary research and horizon-scanning. This distinction grounds your risk management in a scientifically sound basis while acknowledging the full scope of the regulatory discourse.

Author

Claudia Otto

As a lawyer and researcher, Claudia specializes in AI safety, security, and risk assessment under the EU AI Act, the subject of her Master's thesis in Security and Disaster Management (MBA).

References

Donella H. Meadows, Thinking in Systems - A Primer, Sustainability Institute, 2008

Cite this briefing

Otto, AI Risk Literacy, The term "systemic risk" does not mean what you think it does, September 2025