
The first substantive provisions of the AI Act entered into force on February 2, 2025 (see our post EU AI Act Published: Dates for Action for a more detailed timeline). The AI Act adopts a risk-based approach, categorizing AI systems according to the risks they pose, and several employment-related use cases are classified as high-risk or prohibited outright. Consequently, employers should authorize the use of AI systems in the workplace only after a thorough legal review.

Here is a summary of the key considerations for employers when implementing AI systems in the workplace:

Obligations already in force

  • Employers must ensure that their staff possess the necessary knowledge and skills to operate the AI systems used by the company. This AI competence can be developed through information sessions and regular training courses. Employers must also provide employees with the operating instructions for any AI system.
  • Certain AI systems, such as those designed to recognize employees’ emotions, are prohibited in the workplace from the outset.

Obligations you should be preparing for now

  • AI systems used in the HR sector are frequently deemed high-risk. High-risk AI systems include those used for assessing, selecting, or recruiting applicants, as well as systems making decisions related to promotions, dismissals, or working conditions.
  • High-risk AI systems require human oversight. The person exercising that oversight must have the necessary competence and training and must be authorized to intervene and correct the system where necessary.
  • Data entered into the AI system by the company must be relevant to the system’s purpose and representative. Data is considered representative if all groups for whom the AI system is intended are adequately reflected in statistical terms.
  • Prior to deploying a high-risk AI system, it is essential to inform the works council, if any, and the affected employees about its use. Furthermore, if the AI system is capable of monitoring employees or processes their personal data, a works council agreement must be concluded.

Recommendation

We advise companies to proactively establish a strong compliance system for their AI systems. Without such a system, the benefits of AI may be outweighed by the significant risk of penalties under the AI Act, which provides for fines of up to EUR 35 million or 7% of annual global turnover (whichever is higher) for violations.

Click here to read the German version.

Author

Dr. Lukas Feiler, SSCP, CIPP/E, heads the Firm’s Commercial, Data, IPTech and Trade practice in Vienna. He specializes in technology litigation, focusing on regulatory and civil disputes in the areas of data protection, AI, and platform regulation. Building on his litigation expertise, Lukas advises clients on strategic compliance issues in the areas of cybersecurity, data protection, and AI. Lukas also leads the AI Desk in Vienna and is a member of the Firm’s EMEA Data Privacy & Security leadership team. He regularly represents clients before the Austrian Supreme Court, the Austrian Administrative Supreme Court, the European Commission, the EU General Court, and the CJEU.

Author

Philipp Maier is a partner and head of the Baker McKenzie Employment Law Practice Group in Vienna.

Author

Andrea Haiden is a senior associate in Vienna with over eight years of legal experience in employment law.

Author

Mag. Adrian Brandauer is a junior associate in Baker McKenzie's IPTech Team in Vienna.