On 11 February 2025, the Singapore government announced new AI safety initiatives, namely: (i) the Global AI Assurance Pilot for best practices around technical testing of generative AI (“GenAI”) applications; (ii) the Joint Testing Report with Japan; and (iii) the publication of the Singapore AI Safety Red Teaming Challenge Evaluation Report. These initiatives aim to enhance AI governance, innovation and safety standards.

At the recent AI Action Summit held in Paris, France, Singapore unveiled a series of AI safety initiatives that reflect its commitment to rallying industry and international partners toward concrete actions that advance AI safety.

First, the Global AI Assurance Pilot was launched by the AI Verify Foundation and the Infocomm Media Development Authority (IMDA). This is intended to serve as a testbed to establish global best practices around the technical testing of GenAI applications. The pilot will pair AI assurance and testing providers with organizations deploying GenAI applications, focus on technical testing of real-life applications and use lessons learned to create generalizable insights on “what and how to test.”

The insights from the pilot will help develop testing norms and best practices, lay the foundations for a viable AI assurance market and enhance AI testing tools. The pilot is expected to be completed by May 2025 and will be showcased at Asia Tech x Singapore 2025.

Second, the Joint Testing Report, developed in collaboration with Japan, was released. The report aims to make large language models (LLMs) safer in different linguistic environments by evaluating whether their guardrails hold up in non-English settings. Singapore contributed by bringing together global linguistic and technical experts to conduct tests across 10 languages and five harm categories, helping to build up evaluation capabilities and methodological standards.

Third, the IMDA published the Singapore AI Safety Red Teaming Challenge Evaluation Report, which examines how LLMs perform across different languages and cultures in the Asia Pacific region, and whether their safeguards hold up in these contexts. In particular, the report sets out a consistent methodology for testing across diverse languages and cultures.

Key takeaway

Through these newly announced initiatives, Singapore is making strides in strengthening its AI safety and governance frameworks, with a view to promoting transparency and accountability even as more businesses adopt AI systems.

Author

Andy Leck is the head of the Intellectual Property (IP) Practice Group and a member of the Dispute Resolution Practice Group in Singapore. He is a core member of Baker McKenzie's regional IP practice and leads the Myanmar IP Steering Committee.

Author

Ren Jun Lim represents local and international clients in both contentious and non-contentious intellectual property matters. He also advises on a full range of healthcare, as well as consumer goods-related legal and regulatory issues.

Author

Ken Chia is a member of the Firm’s IP Tech, International Commercial & Trade and Competition Practice Groups. He is an IAPP Certified International Privacy Professional (FIP, CIPP(A), CIPT, CIPM) and a fellow of the Chartered Institute of Arbitrators and the Singapore Institute of Arbitrators. His practice focuses on IT, telecommunications, intellectual property, trade and commerce, and competition law matters.

Author

Sanil is a local principal in the Intellectual Property & Technology (IPTech) Practice Group at Baker McKenzie Wong & Leow.

Author

Daryl Seetoh is a local principal in the Intellectual Property & Technology (IPTech) Practice Group at Baker McKenzie Wong & Leow.