Analyzing critical legal trends and developments across data, cyber, AI and digital regulations from around the world and beyond borders

On 2 February 2025, the first deadlines under the EU AI Act took effect. Below, we consider where these are likely to sit on your organisation's AI risk matrix.

Ban on prohibited systems: high severity but limited scope

Technologies covered by the ban include: the use of subliminal techniques; systems that exploit vulnerable groups; biometric categorisation; social scoring; individual predictive policing; facial recognition systems using untargeted scraping; systems inferring emotions in workplaces and educational institutions; and ‘real-time’ remote biometric identification systems in publicly accessible spaces for law enforcement purposes. In some use cases, qualifying thresholds must be met before a system is prohibited, and certain limited exceptions apply.

While of course this ban will have a significant impact on providers and deployers of these types of system, most organisations are unlikely to be using prohibited systems. Their attention is likely to be focused on high-risk and other regulated use cases, including general-purpose AI, where obligations start to kick in from August 2025. You can find more detail on the AI Act implementation timeline, and practical steps to prepare for compliance, in our post EU AI Act Published: Dates for Action.

AI literacy: wide scope, little practical guidance

The AI literacy obligation also took effect from 2 February 2025 and is much wider than many appreciate, as our blog post brAIn Teasers: Myth-Busting AI Literacy under the EU AI Act explains.

It applies to providers and deployers (i.e. parties under whose authority an AI system is used) of any AI system (Article 4). This means an organisation is caught if staff use generative AI chatbots, if the organisation is developing or distributing an AI system, or even if it licenses in an AI system for back-office purposes. Article 4 requires in-scope organisations to “take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf” (emphasis added). The recitals explain that AI literacy is a broad concept, covering both “relevant actors in the AI value chain” and “affected persons” and with an overall aim of equipping them to make informed decisions regarding AI systems.

There is, as yet, no EU-level guidance on what the obligation in Article 4 of the AI Act means in practice, particularly in relation to the wider AI value chain and affected persons, although national regulators have begun releasing guidance – for example, the Dutch data protection authority released high-level guidance last week, available (in Dutch only) here. In line with the risk-based approach of the EU AI Act as a whole, we would expect early enforcement efforts to be focused on mitigating the potential for harm to the health, safety or fundamental rights of individuals. The EU AI Office is holding a webinar on AI literacy on 20 February which will cover the initiatives the AI Office foresees to facilitate the implementation of the general requirement in Article 4, which should provide an indication of the likely direction of travel.

The AI literacy requirements must also be considered in the wider context of the AI Act and an organisation’s holistic AI strategy. AI literacy is an integral part of other obligations under the AI Act: for example, effective human oversight of high-risk systems clearly requires that those humans understand the system and its risks, and Article 14(4)(a) explicitly requires those exercising oversight to properly understand the relevant capacities and limitations of the high-risk AI system. Organisations will need to design their AI literacy programme to align with their own place in the AI ecosystem and the further obligations that will kick in under the AI Act over the coming months. In addition, many organisations will already have AI training, appropriate use guidance and other measures in place for staff as part of their wider AI governance strategy, and any AI Act-driven literacy efforts will need to dovetail with this strategy.

You can find more practical pointers on compliance with the AI literacy requirements in our previous post here, and you can subscribe to Connect on Tech through the link at the top right of this article to receive further updates.

Author

Ben works with clients on matters involving the cross-over space of media, IP and technology. His practice has a particular focus on artificial intelligence, data protection, copyright and technology disputes. He has particular expertise in intermediary liability issues.

Author

Vin leads our London Data Privacy practice and is also a member of our Global Privacy & Security Leadership team. He brings more than 22 years of experience in this specialist area, advising clients across a range of data-rich sectors including retail, financial services/fintech, life sciences, healthcare, proptech and technology platforms.

Author

Elisabeth is a partner in Baker McKenzie's Brussels office. She advises clients in all fields of IT, IP and new technology law, with a special focus on data protection and privacy aspects. She regularly works with companies in the healthcare, finance and transport and logistics sectors.

Author

Cristina Duch is a partner in the Intellectual Property Practice Group in Barcelona and leader of the Firm's EMEA Brand Enforcement & Disputes Practice Group. She has significant experience in a wide range of intellectual property matters, with particular emphasis on trademarks, designs, unfair competition and advertising. She is a member of the Spanish Institute of Chartered Industrial Property Agents.

Author

Dr. Lukas Feiler, SSCP, CIPP/E, has more than eight years of experience in IP/IT and is a partner and head of the IP and IT team at Baker McKenzie • Diwok Hermann Petsche Rechtsanwälte LLP & Co KG in Vienna. He is a lecturer for data protection law at the University of Vienna Law School and for IT compliance at the University of Applied Sciences Wiener Neustadt.

Author

Francesca Gaudino is the Head of Baker McKenzie’s Information Technology & Communications Group in Milan. She focuses on data protection and security, advising particularly on legal issues that arise in the use of cutting-edge technology.

Author

Sue is a Partner in our Technology practice in London. Sue specialises in major technology deals including cloud, outsourcing, digital transformation and development and licensing. She also advises on a range of legal and regulatory issues relating to the development and roll-out of new technologies including AI, blockchain/DLT, metaverse and crypto-assets.

Author

José María Méndez is the partner in charge of the Intellectual Property and Information Technology & Communications practice at Baker & McKenzie Madrid. He was previously a partner in the Intellectual Property practice of an international firm, as well as deputy general secretary of Sogecable and head of the legal department for its film and television business. He regularly takes part in pro bono activities for organisations such as Cáritas Diocesanas and Aldeas Infantiles. He also teaches on the Master's in Intellectual Property at Universidad Carlos III.

Author

Prof. Dr. Michael Schmidl is co-head of the German Information Technology Group and is based in Baker McKenzie's Munich office. He is an honorary professor at the University of Augsburg and specialist lawyer for information technology law (Fachanwalt für IT-Recht). He advises in all areas of contentious and non-contentious information technology law, including internet, computer/software, data privacy and media law. Michael also has a general commercial law background and has profound experience in the drafting and negotiation of outsourcing contracts and in carrying out compliance projects.

Author

Eva-Maria Strobel is a partner in Baker McKenzie's Zurich office. She is a member of the Firm's global IPTech Practice Group, chairs the EMEA IPTech Practice Group and heads the Swiss IPTech team. She focuses on developing intellectual property strategies to procure, protect and commercialise her domestic and multinational clients' intangible assets and to grow the return on investment.

Author

Florian Tannen is a partner in the Munich office of Baker McKenzie. He advises on all areas of contentious and non-contentious information technology law, including internet, computer/software and data privacy law.

Author

Kathy Harford is the Lead Knowledge Lawyer for Baker McKenzie’s global IP, Data & Technology practice.

Author

Karen Battersby is Director of Knowledge for Industries and Clients and works in Baker McKenzie's London office.