There is no AI literacy without
AI risk literacy.

What is AI risk literacy?
AI risk literacy means
- the knowledge and understanding of risk as a concept and of its constituent factors,
- the skills to recognise these factors and their interplay, as well as
- the skills to apply the relevant methodologies to assess and manage risk in practice.
A hub for insight and analysis
This platform is structured into distinct areas to provide a comprehensive and transparent resource for building your AI risk literacy.
Briefings
The Briefings section offers foundational and key information, including analyses of current developments. It is intended to provide AI risk literacy through expert contextualisation.
Library
The Library offers a curated collection of publications that contribute to AI risk literacy. Recommended contributions adhere to the principles of good scientific practice.
Partner perspectives
The Partner perspectives section offers companies a platform to share their views and findings. All contributions in this section are clearly marked as sponsored content.
Discourse
The Discourse section discusses and analyses new publications on AI risk, adhering to the principles of good scientific practice. It serves to foster critical discourse.
KI-Risikokompetenz
KI-Risikokompetenz is the German term for AI risk literacy. This section serves as the hub for German-language content, particularly publications from our library.
Guiding concepts
AI risk literacy is based on a shared understanding and clear guiding concepts. This platform provides knowledge on, among other things, risk, hazard, systemic risk, and uncertainty.
Why does confusing hazard and risk lead to bad decisions?
It makes us focus on the wrong things. We might block a valuable innovation out of a vague fear of a hazard, even if the actual risk is very low.
At the same time, it makes us blind to the real risk because we are too busy concentrating on a hazard. Waiting for a hazard to materialise before acting is a common cause of preventable disasters.
Therefore, be cautious: Heuristic concepts like informal taxonomies that fail to distinguish between hazard and risk can create blind spots, exposing you to avoidable liability.
- Claudia Otto (founder of AI Risk Literacy)

Guiding principles
All content on this platform, whether editorial or sponsored, adheres to the universal principles of good scientific practice (e.g. the DFG Code of Conduct "Guidelines for Safeguarding Good Research Practice").
In addition to the disclosure of potential conflicts of interest and compliance with law and ethics, this includes in particular the following principles:
01
Robust and verifiable methods
Research content and papers must be based on a scientifically sound and verifiable method. It must be possible to retrace how results were obtained.
02
Continuous, documented quality assurance
Every step of the research is conducted lege artis. Field-specific standards and established methods are observed. Errors are corrected, and flawed work is retracted.
03
Strict honesty & rigorous questioning
This principle requires strict honesty in attributing one's own contributions and those of others, rigorously questioning all findings, and actively promoting a critical discourse.
04
Rigorous research design
The current state of research is comprehensively reviewed and acknowledged. Methods are chosen and applied to actively prevent a distortion of the results (bias).
Activate your AI risk literacy
Share it, talk about it, stay ahead with AI Risk Literacy! Here are three ways to activate your AI risk literacy:

Share your expertise
Are you a professional with a deep understanding of AI-related risk?
AI Risk Literacy is a platform for rigorous, scientifically grounded analysis. If you are interested in contributing to AI Risk Literacy, e.g. in the Discourse or Partner perspectives sections, get in touch.

Join the conversation
AI risk literacy is a shared endeavour. Share your own insights, articles, and thoughts on AI risk, hazard, and related topics on LinkedIn. Use the hashtag #AIRiskLiteracy to connect with other professionals and contribute to a more informed discourse.
#AIRiskLiteracy

Newsletter: Stay ahead on AI risk
The AI Risk Literacy newsletter offers updates on new contributions to the platform, the latest news, expert analyses and actionable insights.
You can subscribe on LinkedIn to receive the newsletter directly, or simply explore the archive – no subscription required.
