As the AI Seoul Summit commenced on May 21, 2024, US Secretary of Commerce Gina Raimondo unveiled a strategic vision from the United States Artificial Intelligence Safety Institute (AISI), outlining plans to advance AI safety and responsible AI innovation at a time of “extraordinary advancement in artificial intelligence,” in which AI has become more powerful, more capable, and more widely adopted.
The AISI was launched in 2023 by the National Institute of Standards and Technology (NIST), which sits within the Department of Commerce. It brings together some of the brightest minds in academia, industry, and government to advance our understanding of, and mitigate, the risks posed by advanced AI, so that we can all harness its benefits.
The strategy is grounded in two core principles: first, that beneficial AI depends on AI safety; second, that AI safety depends on science. With these in mind, the AISI plans to address a number of key challenges, including:
- A lack of standardized definitions and metrics for AI safety
- Underdeveloped testing, evaluation, validation, and verification methods and best practices
- Absence of established risk mitigations across the AI design and deployment lifecycle
- Limited coordination on AI safety issues, both nationally and internationally
To achieve these ambitions, the AISI intends to focus on three key goals:
- Advancing the science of AI safety through the development of empirically grounded tests, benchmarks, and evaluations of AI models, systems, and agents, to find practical solutions for both near- and long-term AI safety challenges.
- Articulating, demonstrating, and disseminating the practices of AI safety by building and publishing specific metrics, evaluation tools, methodological guidelines, protocols, and benchmarks for assessing risks of advanced AI.
- Supporting institutions, communities, and coordination of AI safety through an integrated ecosystem of diverse disciplines, perspectives, and experiences, and by promoting the adoption of guidelines, evaluations and recommended safety measures and risk mitigations.
Alongside the strategic vision, Secretary Raimondo announced plans to launch a global scientific network for AI safety through meaningful engagement with AI Safety Institutes and other government-backed scientific offices with an AI safety focus and a commitment to international cooperation, expanding on previously announced collaborations with institutes in Japan, Canada, Singapore, and the UK.
The AISI intends to bring together international AI Safety Institutes and other stakeholders in the San Francisco area later this year.
“Recent advances in AI carry exciting, life-changing potential for our society, but only if we do the hard work to mitigate the very real dangers of AI that exist if it is not developed and deployed responsibly,” said U.S. Secretary of Commerce Gina Raimondo, adding that “Safety fosters innovation, so it is paramount that we get this right and that we do so in concert with our partners around the world to ensure the rules of the road on AI are written by societies that uphold human rights, safety, and trust.”
Read the full document.