The idea of an existential risk of AI is not grounded in science or disaster management.

Existential Risk: More about interests than safety

How to distinguish unscientific from scientific claims

The concept of Artificial Intelligence Existential Risk (AI X-Risk, see the overview in Kasirzadeh, 2025) is not grounded in a scientifically sound risk assessment; it is a philosophical construct and movement. It rests on the notion that a substantial progression in developing Artificial General Intelligence (AGI) - a term that has never been defined by scientific consensus - could lead to a global catastrophe or even the outright extinction of humanity. The concept is inherently speculative and focuses on a hypothetical future, not on current AI technologies.

The lack of scientific rigor stems from the fact that Existential Risk describes a hazard that is purely hypothetical. Risk is formally defined as the combination of the probability of an occurrence of harm and the severity of that harm. A scenario with an undefined and unquantifiable probability of occurrence makes a formal risk assessment, and therefore the definition of a risk, impossible. The systemic basis for a global catastrophe or human extinction is left entirely unexplained. While such concepts can be useful for precautionary research, they are not suitable for operational risk or disaster management. A real-world analysis is required but is consistently avoided.
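To make this concrete: a minimal sketch, assuming the common simplification that the "combination" in this definition is expressed as a product (as in many risk management standards), would be

\[
\text{Risk} = \text{Probability of occurrence of harm} \times \text{Severity of that harm}
\]

If the probability term is undefined and unquantifiable, this product, and with it the risk, cannot be meaningfully assessed, however severe the assumed harm.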

Pointing out this deficit in scientific basis is not the same as demanding perfect predictability. Rather, professional risk and disaster management rely on definable probability pathways and testable theses to establish mitigation strategies. No arguments have been offered as to why this would not be feasible for emerging technologies as well.

From a disaster management perspective, this AI X-Risk movement contradicts established science and the latest research. Earthquakes are hard to predict, but earthquake-induced disasters can be prevented. The AI X-Risk movement, however, keeps the hazard vague and avoids a detailed examination of how a disaster could unfold. This indicates a lack of genuine interest in managing a disaster. Instead, the underlying interest appears to be an undisclosed one.

Unfortunately, these claims have actually made it into the EU AI Act. Recital 110 of the EU AI Act includes "risks from models of making copies of themselves or ‘self-replicating’ or training other models". This indicates that the regulation addresses a hazard based on a hypothetical and unverified idea rather than on a scientific basis.

Recommendations

When faced with fear-inducing narratives about hypothetical AI dangers, the professional response is not anxiety, but analysis. Question the (un-)scientific basis, identify the underlying interests, and separate speculation from fact. A sober, critical (risk) assessment is the most effective antidote to fear.

Author

Claudia Otto

As a lawyer and researcher, Claudia specializes in AI safety, security, and risk assessment under the EU AI Act, the subject of her Master's thesis in Security and Disaster Management (MBA).


Cite this briefing

Otto, AI Risk Literacy, Existential Risk: More about interests than safety, September 2025
