The Infocomm Media Development Authority of Singapore (IMDA) has set up the AI Verify Foundation to harness the collective power and contributions of the global open-source community to develop AI Verify testing tools for the responsible use of artificial intelligence (AI).
AI Verify is an AI governance testing framework and software toolkit, first developed by IMDA in consultation with companies across different sectors and of varying scales.
Announcing this at the ATxAI conference, a part of the ongoing Asia Tech x Singapore (ATxSG) event, Singapore’s Minister for Communications and Information, Josephine Teo, said the foundation will boost AI testing capabilities and assurance to meet the needs of companies and regulators globally.
Teo said the launch of the AI Verify Foundation will support the development and use of AI Verify to address the risks of AI.
The not-for-profit foundation will:
- Foster a community to contribute to the use and development of AI testing frameworks, code base, standards, and best practices,
- Create a neutral platform for open collaboration and idea-sharing on testing and governing AI, and
- Nurture a network of advocates for AI and drive broad adoption of AI testing through education and outreach.
Launched as a minimum viable product for an international pilot last year, AI Verify attracted the interest of over 50 local and multinational companies including IBM, Dell, Hitachi and UBS.
Seven pioneer members of the foundation, namely IMDA, Aicadium (Temasek's AI Centre of Excellence), IBM, Microsoft, Google, Red Hat and Salesforce, will guide the strategic directions and development of the AI Verify roadmap.
As a start, the foundation will also have more than 60 general members such as Adobe, DBS, Meta, SenseTime and Singapore Airlines.
Fostering open source
The foundation will help foster an open-source community that contributes to AI testing frameworks, code bases, standards and best practices, and will create a neutral platform for open collaboration and idea-sharing on testing and governing AI, the minister added.
AI Verify is now available to the open-source community and will benefit the global community by providing a testing framework and toolkit consistent with internationally recognised AI governance principles, such as those from the European Union (EU), the Organisation for Economic Co-operation and Development (OECD) and Singapore, she added.
IMDA noted that jurisdictions around the world have coalesced around a set of key principles and requirements for trustworthy AI.
These align with AI Verify's testing framework, which comprises 11 AI governance principles:
- Transparency
- Explainability
- Repeatability/reproducibility
- Safety
- Security
- Robustness
- Fairness
- Data governance
- Accountability
- Human agency and oversight
- Inclusive growth, social and environmental well-being
The testing processes developed by the foundation comprise technical tests on three of these principles, namely fairness, explainability and robustness, while process checks are applied to all 11 principles.
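AI Verify's actual test implementations live in its open-source repository. Purely as an illustration of what such technical tests measure, the sketch below computes a simple demographic-parity gap (a common fairness metric) and an input-perturbation accuracy drop (a crude robustness probe) for a generic scikit-learn classifier. The function names, toy dataset and choice of metrics are all assumptions made for illustration, not part of the AI Verify API.

```python
# Illustrative sketch only: NOT the AI Verify API. It shows the kind of
# measurable checks a "technical test" for fairness or robustness performs.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

def demographic_parity_gap(model, X, group):
    """Difference in positive-prediction rates between two groups
    (a simple, widely used fairness metric)."""
    preds = model.predict(X)
    return abs(preds[group == 0].mean() - preds[group == 1].mean())

def perturbation_accuracy_drop(model, X, y, noise_scale=0.1, seed=0):
    """Accuracy drop under Gaussian input noise (a basic robustness probe)."""
    rng = np.random.default_rng(seed)
    clean_acc = (model.predict(X) == y).mean()
    noisy_acc = (model.predict(X + rng.normal(0, noise_scale, X.shape)) == y).mean()
    return clean_acc - noisy_acc

# Toy tabular data; the last feature stands in for a hypothetical
# sensitive attribute used to define the two groups.
X, y = make_classification(n_samples=1000, n_features=10, random_state=42)
group = (X[:, -1] > 0).astype(int)
model = LogisticRegression(max_iter=1000).fit(X, y)

print("demographic parity gap:", demographic_parity_gap(model, X, group))
print("accuracy drop under noise:", perturbation_accuracy_drop(model, X, y))
```

In practice such metrics are reported against thresholds or benchmarks rather than judged in isolation, which is why the framework pairs them with process checks.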
Integrated software toolkit
IMDA said AI Verify is a single integrated software toolkit that operates within the user’s enterprise environment.
It enables users to conduct technical tests on their AI models and record process checks. The toolkit then generates testing reports for the AI model under test.
User companies can be more transparent about their AI by sharing these testing reports with their stakeholders, IMDA added.
AI Verify can currently perform technical tests on common supervised-learning classification and regression models for most tabular and image datasets.
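As a minimal sketch of that workflow, assuming a hypothetical report format (none of the names below come from the real AI Verify toolkit), the following code trains a supervised regression model on tabular data, runs one technical test, records two process-check answers and writes the results out as a JSON testing report:

```python
# Hypothetical workflow sketch: the report structure and field names are
# illustrative assumptions, not the real AI Verify toolkit interface.
import json
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

# A supervised regression model on tabular data, the kind of
# model the toolkit says it can currently test.
X, y = make_regression(n_samples=500, n_features=8, noise=5.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LinearRegression().fit(X_tr, y_tr)

# "Technical test": an automated, quantitative check on the model.
technical_results = {"r2_on_holdout": r2_score(y_te, model.predict(X_te))}

# "Process check": a recorded answer about governance practice;
# in the real framework these cover all 11 principles.
process_checks = {
    "accountability": "Model owner and escalation path documented",
    "data_governance": "Training data lineage recorded",
}

# Generate a testing report for the model under test.
report = {
    "model": type(model).__name__,
    "technical_tests": technical_results,
    "process_checks": process_checks,
}
with open("testing_report.json", "w") as f:
    json.dump(report, f, indent=2)
print(json.dumps(report, indent=2))
```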
IMDA added that AI Verify does not set ethical standards, nor does it guarantee that AI systems tested will be completely safe or free from risks or biases.