The EU AI Act introduces a requirement for organisations to ensure AI literacy, and the clock is ticking for putting measures in place. But there are a lot of myths and misconceptions about what that really means.
In this blog post, we tackle the five biggest myths we’ve come across. The main takeaways? Your organisation is caught by this requirement, you don’t have long to put your measures in place, and this is not just about staff training. Bottom line – staff training is great (which is why commentators keep mentioning it), but in reality, AI literacy is likely to require a more holistic approach: consider your AI supply chain, ongoing opportunities for education (e.g. policies, notices and reminders), and regular monitoring and documentation.
Myth 1: My organisation has some time before it needs to act.
Reality: The time is now! This requirement applies from 2 February 2025. You have just four months to ensure you are in a position to comply, and one of the first steps will be to assess current literacy levels.
Myth 2: The AI literacy requirement applies only to certain AI supply chain actors, like providers of high-risk systems.
Reality: This applies to almost everyone.
The AI literacy obligation applies to providers and deployers (i.e. parties under whose authority an AI system is used) of any AI system (Article 4). This means your organisation is caught if staff use generative AI tools like ChatGPT, if your organisation is developing or distributing an AI system, or even if your organisation licenses in an AI system for back-office purposes.
Myth 3: This is about ensuring your staff have sufficient AI literacy.
Reality: This is not just about your staff – think big! Affected persons and your supply chain are in-scope too.
There is plenty of commentary suggesting that this requirement is met by ensuring AI literacy amongst staff, and it’s true that measures for staff will be key. But in reality, Article 4 is much wider than this: “Providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf” (emphasis added).
It’s true that the recitals make clear that AI literacy should equip providers and deployers with skills and knowledge, but they don’t stop there: Recital 20 expressly calls for both “relevant actors in the AI value chain” and “affected persons” to be considered too.
So who does this capture in practice? Consider putting in place policies for the persons who may be affected by your AI systems, your third-party vendors, and any downstream supply chain actors. Have something concrete to point to, and make sure your measures are inclusive.
Myth 4: This is all about putting training in place.
Reality: Training is great, but you can also think beyond training where you can (e.g. policies, guidance, reminders, contractual assurances, notices and pop-ups).
Article 4 refers to putting “measures” in place, but nowhere does the AI Act limit this to training. It’s true that training will be key, but think outside the box where you can. Could individuals benefit from written guidance and policies? Do some individuals need reminders about key points when accessing an AI tool for the first time? Could this be achieved by pop-ups and prompts? What warranties do you want in your supply chain?
Organisations may want to identify AI literacy champions to embrace these initiatives, and documentation will be key: consider monitoring progress, assessing effectiveness, and ensuring records are kept for auditing and accountability purposes.
Myth 5: This is a blanket, absolute requirement – an off-the-shelf solution is the answer.
Reality: This is not about creating an army of AI gurus. Tailor your measures to different audiences and contexts.
Providers and deployers need to ensure, “to their best extent”, a “sufficient level” of AI literacy. This is not an absolute requirement, and providers and deployers are able to take into account:
- the technical knowledge, experience, education and training of their staff and other persons dealing with the operation and use of AI.
- the context the AI systems are to be used in.
- the persons on whom the AI systems are to be used.
The aim is for providers, deployers and affected persons to:
- make an informed deployment of AI systems.
- gain awareness about the opportunities and risks of AI and possible harms (Article 3(56)).
Organisations may consider foundational content supplemented with tailored learning paths for specific roles, but a generic, off-the-shelf training deck rolled out to all staff is unlikely to stand up to scrutiny if something goes wrong.
What’s next?
The AI literacy requirement will be fleshed out further over time. We can expect voluntary codes of conduct regarding AI literacy from the European Commission and Member States, and the AI Board will promote AI literacy tools, public awareness and understanding of AI systems. However, it’s likely your organisation will need to start implementing compliance measures before this kind of guidance is available.
If your organisation is considering how to comply with AI literacy requirements and you’d like to discuss this, please get in touch.