
27 May 2025 · 6 minute read

EU AI Act Update

AI Literacy: Questions & Answers

Since our last EU AI Act update, the European Commission has published a Q&A catalogue focused on AI Literacy. The Q&A aims to clarify mandatory AI Literacy requirements outlined in Article 4 of the AI Act and offers practical guidance to help organizations understand and meet their obligations.

In this edition, we highlight key takeaways and set out some immediate steps to take.

  • AI Literacy is mandatory and enforceable: As of 2 February 2025, organizations must ensure that their staff – and individuals acting on their behalf, such as freelancers – have an adequate level of AI Literacy when using or operating AI systems. From 2 August 2026, national market surveillance authorities are expected to begin supervising and enforcing compliance in accordance with national implementation laws. Member states are expected to adopt these national enforcement laws by 2 August 2025, which may include penalties and other sanctions for non-compliance. Private enforcement may also apply under national laws, e.g. allowing individuals to seek damages if they suffer harm due to an organization’s failure to meet its AI Literacy obligations.
  • AI Literacy impacts your organization – and your AI value chain: Providers and deployers of AI systems are responsible for ensuring AI Literacy among staff and other persons acting on their behalf – such as contractors, service providers, or, in some cases, even customers who interact with AI systems under their supervision. Because the AI Act applies both inside and outside the EU – whenever an AI system is placed on the EU market, used in the EU, or its use affects people located in the EU – an organization’s obligation to ensure sufficient AI Literacy may also extend to its AI value chain.
  • There is no one-size-fits-all solution: Organizations must assess their role as provider or deployer of AI systems and the risks of their AI systems. AI Literacy measures are only “sufficient” if they are proportionate to the risk level of the specific AI use case at hand. This assessment should consider factors such as the AI system’s purpose, the operational context, market conditions, and the baseline skillset of the trained individuals. For example, if an organization’s AI system is classified as “high-risk” under Article 6 of the AI Act, the organization must provide tailored training to address and mitigate the associated risks. In contrast, lower-risk applications – such as basic content generation using large language models – generally require less extensive training.
  • Minimum AI Literacy program content: Designing tailored AI Literacy programs can require significant efforts. To support organizations, the Q&A outlines some content elements that can serve as a foundational starting point. As a minimum, providers and deployers of AI systems should:

    Ensure a general understanding of AI within the organization:
    What is AI? How does it work? Which specific AI is used, deployed, or distributed by our organization?
    What are the opportunities and risks associated with these activities?

    Consider the organization’s role (i.e. provider or deployer of AI systems):
    Is our organization developing AI systems or just using AI systems developed by another organization?

    Consider the risk of the AI systems provided or deployed:
    What do employees and/or suppliers need to know for their activities concerning the particular AI system? What are the risks they need to be aware of, and which mitigation measures should they know?

    Build concrete AI Literacy actions on the preceding analysis, taking into account
    the technical knowledge, experience, education, and training of staff and other individuals:
    How much does the employee/person know about AI and the relevant AI system? What else should they know?
    as well as the context in which each AI system will be used and the individuals affected:
    Which sector and purpose/service is affected?

  • The Q&A’s pragmatic stance clears up some misconceptions: The AI Office will not impose rigid requirements. This is particularly relevant at this early stage, with formal training standards still evolving. Importantly, there is no general obligation for organizations to obtain external certification or to implement broad, one-size-fits-all training across all levels. This stands in contrast to some overreaching claims in the market and gives organizations welcome flexibility in designing their measures to implement adequate AI Literacy.


Immediate action items

  • Assess the risk scenario around AI that affects your organization.
  • Create a modular AI Literacy training program based on the results of this risk assessment.
  • Identify, categorize, and analyze relevant target groups – i.e. staff and other individuals – and their touchpoints with AI systems.
  • Align your AI Literacy content with each group’s proximity to AI systems and the risks involved.
  • Document training efforts for all relevant individuals.
  • Review and update contracts and related documentation to reflect AI Literacy obligations across your AI value chain.
  • Monitor the Commission’s regularly published guidance; its Living Repository of AI Literacy Practices is a useful source of inspiration.


Need support translating these action points into practice?

We support organizations in designing compliant AI Literacy programs, conducting risk assessments, and developing tailored training strategies that meet the EU AI Act's evolving requirements. Feel free to reach out anytime.


Stay tuned for more updates on the EU AI Act Timeline!

The next set of AI Act provisions (including, but not limited to, those on GPAI, governance, and sanctions) applies from 2 August 2025. For further information, see our AI Focus page or contact us.
