Tech companies will be required to share test results for their artificial intelligence systems with the US government before they are released, under an executive order issued by the White House.
Under the order, which was published on Monday ahead of a landmark AI safety summit in the UK on 1 and 2 November, the government will also set stringent testing guidelines.
“As we advance this agenda at home, the administration will work with allies and partners abroad on a strong international framework to govern the development and use of AI,” said the order.
The AI directives issued by the White House include:
Companies developing AI models that pose a threat to national security, economic security or health and safety must share their safety test results with the government.
The government will set guidelines for so-called red-team testing, in which assessors emulate rogue actors to expose flaws and vulnerabilities in AI systems.
Official guidance on watermarking AI-made content will be issued to address the risk of harm from fraud and deepfakes.
New standards for biological synthesis screening will be developed to mitigate the threat of AI systems helping to create bioweapons.
The White House chief of staff, Jeff Zients, said President Joe Biden had given his staff a directive to move with urgency on the AI issue.
“We can’t move at a normal government pace,” Zients said Biden told him. “We have to move as fast, if not faster than the technology itself.”
The White House said the sharing of test results for powerful models would “ensure AI systems are safe, secure and trustworthy before companies make them public”.
Under the provisions on AI-made deepfakes, the US Department of Commerce will issue guidance to label and watermark AI-generated content to help differentiate between authentic interactions and those generated by software.
Referring to the watermarking plans, the order stated: “Federal agencies will use these tools to make it easy for Americans to know that the communications they receive from their government are authentic – and set an example for the private sector and governments around the world.”
The order also covers areas such as privacy, civil rights, consumer protections and workers’ rights.
According to a White House official, the measures set out in the order will be implemented over a period of 90 to 365 days, with the safety and security items facing the earliest deadlines.
Elsewhere in the order, a national security memorandum will direct the US military and intelligence community on how to use AI safely and ethically. It also calls on Congress to pass legislation protecting Americans’ data privacy. Federal agencies will develop guidelines for evaluating privacy-preserving techniques in AI systems.
Concerns around bias are addressed with an order to provide guidance to landlords, federal benefits programmes and federal contractors to prevent AI algorithms from exacerbating discrimination. A key immediate concern about AI systems is that they inadvertently repeat underlying biases in the datasets they are trained upon. Best practice will also be developed on using AI in the justice system, in areas such as sentencing, predictive policing and parole.
The threat of disruption in the jobs market is addressed with an order to develop best practices for mitigating the harms of job displacement, including measures to “prevent employers from undercompensating workers, evaluating job applications unfairly, or impinging on workers’ ability to organise”. Government agencies will also be issued with guidance on using AI, including standards to protect rights and safety.
The Federal Trade Commission, the US competition watchdog, will be encouraged to use its powers if there are any distortions in the AI market.
In a nod to efforts to regulate AI around the world including discussions at this week’s safety summit, the White House said it would also accelerate the development of AI standards with international partners. The White House will be represented at the summit by the vice-president, Kamala Harris.