Analyzing critical legal trends and developments across data, cyber, AI and digital regulations from around the world and beyond borders

This brief article considers the “Future of Work Report” published on 11 February 2025 by the Australian Parliament’s House of Representatives Standing Committee on Employment, Education and Training (Committee) in an international context, with a focus on its key recommendations relating to the regulation of artificial intelligence (AI) and automated decision making (ADM) technologies in the workplace.

In Australia and internationally, governments and industries are grappling with how to effectively leverage emerging AI and ADM technologies for growth and innovation while managing the associated risks.

One particular area where the risks associated with AI and ADM are attracting increasing scrutiny from regulators is the workplace. AI and ADM technologies are already having a significant impact on workplaces and workforces around the world, and we expect this to intensify through the course of this year as AI technologies are adopted at scale. While AI technology can create significant efficiencies that greatly benefit workers, it also has the potential to negatively affect employees and damage employers’ reputations. AI self-serve tools at work, which give employees instantaneous responses to their HR queries, are an example of an application of AI in the workplace that can benefit workers. In contrast, payroll technology that uses AI can be ill-suited to Australia’s complex compliance requirements and requires appropriate human oversight. The same can be said for AI-assisted and automated recruitment (such as resume screening) and performance assessment tools, which may offer attractive efficiencies and help to streamline review processes but also have the potential to reinforce harmful biases.

In Australia, we have seen recent regulatory changes in relation to AI and ADM, and more are expected soon. In particular, in September last year the Australian Government introduced a Voluntary AI Safety Standard, comprising 10 voluntary guardrails for the development and deployment of AI technology focused on testing, transparency and accountability requirements, and proposed 10 mandatory guardrails for AI in high-risk settings. Shortly thereafter, the first tranche of the Australian Government’s significant reforms to the Privacy Act 1988 (Cth) was enacted, including new provisions that focus on ADM, such as an obligation for businesses that use personal information in ADM processes to notify an individual if the resulting decisions would reasonably be expected to significantly affect the individual’s rights or interests. The final report of the Australian Senate’s Select Committee on Adopting Artificial Intelligence also foreshadowed more significant regulatory reforms, recommending that the Australian Government introduce new, whole-of-economy, dedicated legislation to regulate “high-risk” uses of AI.

Against this backdrop, as momentum continues to build in Australia towards the adoption of comprehensive AI-specific regulation, the Committee has turned its attention to the current and potential impacts (including risks and opportunities) of AI and ADM on workers and the workplace in its “Future of Work Report” (Report) published on 11 February 2025.

In the Report, the Committee evaluates the effectiveness of current and emerging regulatory frameworks to build public trust in technology deployed in the workplace and to ensure that the benefits of AI and ADM technologies, such as increased productivity and efficiency, and job creation and augmentation, are shared between employers and workers. The Committee observes that digital transformation has exposed significant risks and gaps in Australia’s regulatory frameworks and workplace protections, especially in relation to data and privacy, and makes 21 recommendations aimed at maximising the benefits of AI and ADM while addressing associated risks relating to data privacy, job displacement and algorithmic bias.

Notably, the Report recommends that the Australian Government classify all AI systems used for employment-related purposes – including systems used for recruitment, referral, hiring, remuneration, promotion, training, apprenticeship, transfer or termination – as “high-risk” systems, which would be subject to the proposed mandatory guardrails for high-risk AI. Other significant recommendations include recommendations to review and reform the Privacy Act 1988 (Cth) (Privacy Act) and the Fair Work Act 2009 (Cth) (Fair Work Act) to mitigate specific emerging AI and ADM related workplace risks, including to protect worker data and privacy and to improve transparency, accountability and procedural fairness regarding the use of AI and ADM in the workplace.

The Committee also makes several more general recommendations, such as for the Australian Government to require developers to demonstrate that AI systems have been developed using lawfully obtained data that does not breach Australian intellectual property (IP) or copyright laws, and to support the continued creation of Australian-owned IP for technology platforms by establishing an AI Fund for businesses engaged in that work.

Looking Forward

In Australia, employers should anticipate upcoming regulatory changes that may significantly affect their use of AI and ADM technologies in the workplace. Having regard to the Committee’s concerns and recommendations, employers should consider in particular:

  • reviewing how they are using or planning to use AI or ADM technologies to process personal information and inform their employment-related decision-making;
  • assessing procedures currently in place to maintain transparency and explainability of decisions; and
  • evaluating the efficacy of their current data governance measures.

Considering the potential of AI and ADM technologies for transformation and disruption in the workplace, the Committee’s focus on the risks and opportunities presented by AI and ADM is not unique to Australia but reflects concerns that governments and employers all over the world are increasingly grappling with, and may be of broader interest to employers operating internationally.

Highlights of the Committee’s AI-related recommendations include:

  • All AI systems used for employment purposes to be classified as ‘high-risk’ (Recommendation 1): The Committee recommends that the Australian Government classify AI systems used for employment-related purposes – including recruitment, referral, hiring, remuneration, promotion, training, apprenticeship, transfer or termination – as ‘high-risk’ systems. Based on the Committee’s recommendation, such systems would be subject to the mandatory guardrails for high-risk AI proposed by the Department of Industry, Science and Resources in September 2024.
  • Reforms to the Fair Work Act and Privacy Act (Recommendations 2, 11, 15):
    • The Fair Work Act to be:
      • reviewed to ensure that decision-making using AI and ADM is covered under the Fair Work Act, and that employers remain liable for these decisions (Rec 2); and
      • amended to improve transparency, accountability and procedural fairness regarding how AI and ADM systems are used (Rec 15), including by:
        • requiring organisations to disclose the use of AI and ADM systems to existing and prospective workers and customers;
        • developing a legislative ‘right to explanation’; and
        • banning the use of technologies, like AI and ADM systems, for final decision-making without any human oversight, especially for human resourcing decisions.
    • The Fair Work Act and Privacy Act to be reviewed together to protect workers, their data and privacy by:
      • banning high-risk uses of worker data, including disclosures to technology developers;
      • prohibiting the sale to third parties of workers’ personal data and any data collected in connection with work undertaken during employment;
      • requiring meaningful consultation and transparency with workers on the use of surveillance measures and data used by AI systems in the workplace; and
      • empowering the Fair Work Commission to manage the dispute resolution process for complaints relating to breaches of workers’ privacy obligations.
  • Better protections in relation to workplace and technological surveillance (Recommendation 12): The Australian Government to work with states and territories to enhance protections against excessive and unreasonable surveillance in the workplace and prohibit employers from using technological surveillance in relation to an employee’s protected attribute.
  • A Code of Practice to address AI-related workplace health and safety risks (Recommendation 13): The Australian Government to work with Safe Work Australia to develop a Code of Practice that identifies and addresses specific work health and safety risks associated with AI and ADM. This includes establishing limits on the use of AI and ADM in workplaces to mitigate psychosocial risks.
  • Requiring AI developers to demonstrate that AI systems have been developed using lawfully obtained data that does not breach Australian intellectual property or copyright laws (Recommendation 17).
  • Requiring AI developers and deployers (employers) to implement measures against algorithmic bias (Recommendation 20). This would include:
    • using more diverse training datasets and managing rules around user prompts; and
    • conducting regular mandatory independent audits to assess the extent and impacts of algorithmic bias.
  • Greater obligations on employers to consult workers when introducing new technology (Recommendation 18): Employers to be required to consult workers on major workplace changes before, during, and after the introduction of new technology. This would include consideration of whether the introduction of new technology is fit for purpose and does not unduly disadvantage workers.
  • Extending the positive duty currently found in the Sex Discrimination Act to all attributes protected by the Fair Work Act (Recommendation 21).
Author

Jarrod Bayliss-McCulloch is a senior associate in the Information Technology & Commercial department at the London office of Baker McKenzie and advises on major technology-driven transactions and regulatory issues spanning telecommunications, intellectual property, data privacy and consumer law with a particular focus on digital media and new product development.

Author

Adrian is the Head of the Firm's Asia Pacific Technology, Media & Telecommunications Group. His practice focuses on advising on online and offline media interests including digital copyright, data and information transfer, content and advertising regulation, consumer protection, defamation, online payment systems and transaction engines, online gambling, website risk minimisation measures, online security and cryptography, securities licensing, and trade marks and domain names.

Author

Lucienne Gleeson is a partner in Baker McKenzie's Employment Practice in Sydney.