There's risk in AI literacy.
What is AI literacy?
Art. 4 EU AI Act
With Art. 4, the EU AI Act imposes a general obligation on providers and deployers of AI systems to take measures to ensure, to their best extent, a sufficient level of AI literacy among their staff and other persons dealing with the operation and use of AI systems on their behalf. The EU AI Act, however, does not clarify which measures must be taken.
When selecting, implementing, and continuously reviewing such measures, providers and deployers of AI systems have to take into account these persons' technical knowledge, experience, education and training.
They also have to take into account the context in which the AI systems are to be used when defining, implementing, and reviewing those measures.
To ensure that the EU AI Act's main goal of protecting fundamental rights can be achieved, the persons or groups of persons on whom the AI systems are to be used must also be taken into consideration.
From AI literacy to AI risk literacy
According to the definition in Art. 3 (56) EU AI Act,
"AI literacy" means skills, knowledge and understanding that allow providers, deployers and affected persons, taking into account their respective rights and obligations in the context of this Regulation, to make an informed deployment of AI systems, as well as to gain awareness about the opportunities and risks of AI and possible harm it can cause.
AI literacy therefore comprises the skills, knowledge and understanding that allow one to gain awareness of the "risks of AI and possible harm it can cause".
While "risk" here means "hazard", the intention of the lawmakers is clear: AI literacy must reflect relevant competence with regard to hazards and risk in the context of AI systems as described in Art. 4 EU AI Act.
AI risk literacy can therefore be defined as follows:
"AI risk literacy" means the knowledge and understanding of risk as a concept and of its constituent factors, the skills to recognise these factors and their interplay, as well as the skills to apply the relevant methodologies to assess and manage risk in practice.
Notably, AI risk literacy is deliberately not limited to certain actors, to certain AI systems, or to the EU AI Act.
The many faces of AI (risk) literacy
AI literacy must be defined on two distinct but interconnected levels: the organisational and the individual.
The provider or deployer, often an organisation, has the obligation to ensure that the necessary skills, knowledge, and understanding are available when and where needed. This means that AI literacy is an organisational obligation; it is about organising a sufficient knowledge base within the entire system of the organisation. Since each organisation is a system of its own with very individual traits, the organisational measures cannot be generalised.
AI literacy must also address personal competence and individual capabilities. Hence, as described in Art. 4 EU AI Act, the measures to enable those competences and capabilities need to be applied individually, based on the specific person's profile and, most importantly, their role. Personnel directly involved in risk management, in particular risk assessment (i.e., risk assessors), must possess a professional, technical understanding of risk and of how it can be determined and mitigated. Given the interdisciplinary nature of AI as a research field, new risk assessment roles might have to be created to cover specific aspects, or external experts might have to be consulted, or even trained. Conversely, general operational staff primarily require a sufficient foundational understanding to use risk-related terminology (hazard, risk) precisely, for accurate communication and adherence to protocols. This individuality of competence profiles prohibits a simple, standardised approach.
AI literacy, and AI risk literacy in particular, can also be understood as an organisation's leadership realising that, given the interdisciplinary nature of the field, it is a team sport for which processes and protocols have to be implemented.
AI risk literacy goes beyond AI literacy
In reality, however, risk understanding is poorly distributed, both in general and across organisations, which can lead to avoidable incidents.
The professional competence required for adequate risk assessment can rarely be provided by a single person, particularly given the broad applicability of AI and the varying exposures and vulnerabilities of affected groups or individuals. Recent damage cases, particularly those observed in the USA, indicate that their severe consequences could have been avoided through AI risk literacy and proper, interdisciplinary risk assessment and management.
The organisational obligation to ensure AI risk literacy is not a new legal concept. It is the hazards related to AI technologies that necessitate new competences and capabilities. This "update" requirement must therefore be fulfilled under established laws as well.
The lack of AI risk literacy is a global issue
The sometimes disproportionate discussions about risk in connection with AI technologies largely stem from a lack of understanding of risk and a deficit in fundamental, field-specific scientific knowledge from disciplines outside AI (e.g., biology, psychology, the social sciences, and disaster science).
This competence gap drives and enables fear-inducing, counterproductive narratives about, e.g., systemic risk, Existential Risk, Synthetic Biology, and other unproven scenarios, allowing them to thrive and displace science-based risk discourse, regulation, and assessment.
Hence, the lack of AI risk literacy is a global problem that, taken collectively, poses a greater hazard to humanity than AI technologies themselves.
Recommendations
Begin by internalising the core insights of the AI Risk Literacy briefings.
Do not mistake standardised, generic training for legal compliance. Training is only one measure among many, and there might be more effective ways to achieve AI (risk) literacy. This is also why the EU AI Act contains no general training obligation and no directly related sanction.
Always keep in mind that AI literacy is highly individual and cannot be confirmed by a coach or external institution. What can be confirmed is participation, or the successful completion of a test on course content.
Author
Claudia Otto
As a lawyer and researcher, Claudia specialises in AI safety, security, and risk assessment under the EU AI Act, the subject of her Master's thesis in Security and Disaster Management (MBA).

Cite this briefing
Otto, AI Risk Literacy, What is AI literacy?, September 2025
