In a bid to address the challenges associated with the development and deployment of AI, the National Institute of Standards and Technology (NIST) has formed a new consortium.
The development was announced in a document published in the Federal Register on November 2, alongside an official notice inviting applications from individuals with the relevant credentials.
The document states, “This notice is the initial step for NIST in collaborating with non-profit organisations, universities, other government agencies, and technology companies to address challenges associated with the development and deployment of AI.”
The primary objective of this collaboration is to create and implement specific policies and measurements that ensure a human-centred approach to AI safety and governance within the United States.
Collaborators within the consortium will be tasked with a range of functions, including the development of measurement and benchmarking tools, policy recommendations, red-teaming efforts, psychoanalysis, and environmental analysis.
NIST’s initiative comes in response to a recent executive order issued by US President Joseph Biden, which outlined six new standards for AI safety and security.
While European and Asian countries have been proactive in instituting policies governing AI systems with respect to user and citizen privacy, security, and potential unintended consequences, the US has lagged behind.
President Biden’s executive order and the establishment of the AI Safety Institute Consortium mark significant strides in the right direction, yet there remains little clarity on the timeline for implementing laws governing AI development and deployment in the US.
Many experts have expressed concerns about whether current laws, designed for conventional businesses and technologies, are adequate when applied to the rapidly evolving AI sector.
The formation of the AI consortium signifies a crucial step towards shaping the future of AI policies in the US. It reflects a collaborative effort between government bodies, non-profit organisations, universities, and technology companies to ensure responsible and ethical AI practices within the nation.
(Photo by Muhammad Rizki on Unsplash)
See also: UK paper highlights AI risks ahead of global Safety Summit
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week.
Explore other upcoming enterprise technology events and webinars powered by TechForge here.