On 11 February 2025, the Singapore government announced new AI safety initiatives, namely: (i) the Global AI Assurance Pilot for best practices around technical testing of generative AI ("GenAI") applications; (ii) the Joint Testing Report with Japan; and (iii) the publication of the Singapore AI Safety Red Teaming Challenge Evaluation Report. These initiatives aim to enhance AI governance, innovation and safety standards.
At the recent Global AI Summit held in France, Singapore unveiled a series of AI safety initiatives designed to reflect its commitment to rallying industry and international partners toward concrete actions that advance AI safety.
First, the Global AI Assurance Pilot was launched by the AI Verify Foundation and the Infocomm Media Development Authority (IMDA). This is intended to serve as a testbed to establish global best practices around the technical testing of GenAI applications. The pilot will pair AI assurance and testing providers with organizations deploying GenAI applications, focus on technical testing of real-life applications and use lessons learned to create generalizable insights on “what and how to test.”
The insights from the pilot will help develop testing norms and best practices, lay the foundation for a viable AI assurance market and enhance AI testing tools. The pilot is expected to be completed by May 2025 and will be showcased at Asia Tech x Singapore 2025.
Second, the Joint Testing Report, developed in collaboration with Japan, was released. The report aims to make large language models (LLMs) safer across different linguistic environments by evaluating whether guardrails hold up in non-English settings. Singapore contributed by bringing together global linguistic and technical experts to conduct tests across 10 languages and five harm categories, building up evaluation capabilities and methodological standards.
Third, the IMDA published the Singapore AI Safety Red Teaming Challenge Evaluation Report, which provides an understanding of how LLMs perform across different languages and cultures in the Asia-Pacific region, and whether their safeguards hold up in these different contexts. In particular, the report presents a consistent methodology for testing across diverse languages and cultures.
Key takeaway
Singapore is making strides in enhancing its AI safety and governance frameworks through these newly announced initiatives, with a view to promoting transparency and accountability as more businesses adopt AI systems.