On 2 February 2025, the first obligations under the EU AI Act took effect. Below, we consider where these are likely to sit on your organisational AI risk matrix.
Ban on prohibited systems: high severity but limited scope
The ban covers AI systems that deploy subliminal techniques, exploit the vulnerabilities of particular groups, carry out biometric categorisation to infer sensitive characteristics, perform social scoring, predict the risk of an individual committing a criminal offence, create or expand facial recognition databases through untargeted scraping of facial images, infer emotions in workplaces and educational institutions, or conduct ‘real-time’ remote biometric identification in publicly accessible spaces for law enforcement purposes. For some use cases, qualifying thresholds must be met before a system is prohibited, and certain limited exceptions apply.
While this ban will of course have a significant impact on providers and deployers of these types of systems, most organisations are unlikely to be using prohibited systems. Their attention is more likely to be focused on high-risk and other regulated use cases, including general-purpose AI, where obligations start to kick in from August 2025. You can find more detail on the AI Act implementation timeline, and practical steps to prepare for compliance, in our post EU AI Act Published: Dates for Action.
AI literacy: wide scope, little practical guidance
The AI literacy obligation also took effect from 2 February 2025 and is much wider than many appreciate, as our blog post brAIn Teasers: Myth-Busting AI Literacy under the EU AI Act explains.
It applies to providers of any AI system and to deployers, i.e. parties under whose authority an AI system is used (Article 4). This means an organisation is caught if its staff use generative AI chatbots, if it develops or distributes an AI system, or even if it licenses in an AI system for back-office purposes. Article 4 requires in-scope organisations to “take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf”. The recitals explain that AI literacy is a broad concept, covering both “relevant actors in the AI value chain” and “affected persons”, with the overall aim of equipping them to make informed decisions regarding AI systems.
There is, as yet, no EU-level guidance on what the obligation in Article 4 of the AI Act means in practice, particularly in relation to the wider AI value chain and affected persons, although national regulators have begun releasing guidance – for example, the Dutch data protection authority released high-level guidance last week, available (in Dutch only) here. In line with the risk-based approach of the EU AI Act as a whole, we would expect early enforcement efforts to focus on mitigating potential harm to the health, safety or fundamental rights of individuals. The EU AI Office is holding a webinar on AI literacy on 20 February covering the initiatives it foresees to facilitate implementation of the general requirement in Article 4; this should give an indication of the likely direction of travel.
The AI literacy requirements must also be considered in the wider context of the AI Act and an organisation’s holistic AI strategy. AI literacy is an integral part of other obligations under the AI Act: for example, effective human oversight of high-risk systems clearly requires that those humans understand the system and its risks, and Article 14(4)(a) explicitly requires those exercising oversight to properly understand the relevant capacities and limitations of the high-risk AI system. Organisations will need to design their AI literacy programme to align with their own place in the AI ecosystem and the further obligations that will kick in under the AI Act over the coming months. In addition, many organisations will already have AI training, appropriate-use guidance and other measures in place for staff as part of their wider AI governance strategy, and any AI Act-driven literacy efforts will need to dovetail with these existing measures.
You can find more practical pointers on compliance with the AI literacy requirements in our previous post here, and you can subscribe to Connect on Tech through the link at the top right of this article to receive further updates.