Two major steps towards governmental oversight of artificial intelligence (AI) took place this week in the United States and the United Kingdom. Behind both initiatives are moves by each nation to boost its AI research capabilities, including efforts to broaden access to the powerful supercomputers needed to train AIs.
On 30 October, US President Joe Biden signed his nation’s first AI executive order, with a huge swath of directives for US federal agencies to guide the use of AI — and put guardrails on the technology. And on 1–2 November, the United Kingdom hosted a high-profile AI Safety Summit, convened by Prime Minister Rishi Sunak, with representatives from more than two dozen countries and tech companies including Microsoft and Meta. The summit, held at the famed wartime code-breaking facility Bletchley Park, produced the Bletchley Declaration, in which signatories agree to better assess and manage the risks of powerful ‘frontier’ AI — advanced systems that could be used to develop risky technologies, such as bioweapons.
“We’re talking about AI that doesn’t yet exist — the things that are going to come out next year,” says Yoshua Bengio, an AI pioneer and scientific director of Mila, the Quebec AI Institute in Canada, who attended the summit.
Both nations have committed to developing a national AI ‘research resource’, which aims to provide AI researchers with cloud access to heavy-hitting computing power. The United Kingdom, in particular, has made a “massive investment”, says Russell Wald, who leads the policy and society initiative at the Stanford Institute for Human-Centered AI in California.
These efforts are meaningful for a branch of science that relies heavily on expensive computing infrastructure, says policy researcher Helen Toner at Georgetown University’s Center for Security and Emerging Technology in Washington DC. “A major trend in the last five years of AI research is that you can get better performance from AI systems just by scaling them up. But that’s expensive,” she says.
“Training a frontier AI system takes months and costs tens or hundreds of millions of dollars,” agrees Bengio. “In academia, this is currently impossible.” Both research resource initiatives aim to democratize these capabilities.
“It’s a good thing,” says Bengio. “Right now, all of the capabilities to work with these systems are in the hands of companies that want to make money from them. We need academics and government-funded organizations that are really working to protect the public to be able to understand these systems better.”
All the bases
Biden’s executive order is limited to guiding the work of federal agencies, because it is not a law passed by Congress. Nevertheless, says Toner, the order has a broad reach. “What you can see is the Biden administration really taking AI seriously as an all-purpose tech, and I like that. It’s good that they’re trying to cover a lot of bases.”
One important emphasis in the order, says Toner, is on creating much-needed standards and definitions in AI. “People will use words like ‘unbiased’, ‘robust’ or ‘explainable’,” to describe AI systems, says Toner. “They all sound good, but in AI, we have almost no standards for what these things really mean. That’s a huge problem.” The order calls for the National Institute of Standards and Technology to develop such standards, alongside tools (such as watermarks) and ‘red team testing’ — in which good actors try to misuse a system to test its security — to help ensure that powerful AI systems are “safe, secure and trustworthy”.
The executive order directs agencies that fund life-sciences research to establish standards to protect against using AI to engineer dangerous biological materials.
Agencies are also encouraged to help skilled immigrants with AI expertise to study, stay and work in the United States. And the National Science Foundation (NSF) must fund and launch at least one regional innovation engine that prioritizes AI-related work, and within the next 18 months establish at least four national AI research institutes, on top of the 25 currently funded.
Research resources
Biden’s order commits the NSF to, within 90 days, launch a pilot of the National AI Research Resource (NAIRR) — the proposed system to give researchers cloud access to AI-capable computing power. “There’s a fair amount of excitement about this,” says Toner.
“It’s something we’ve been championing for years. This is recognition at the highest level that there’s need for this,” says Wald.
In 2021, Wald and colleagues at Stanford published a white paper with a blueprint of what such a service might look like. In January, a NAIRR task force report called for its budget to be $2.6 billion over an initial period of 6 years. “That’s peanuts. In my view it should be substantially larger,” says Wald. Lawmakers will have to pass the CREATE AI Act, a bill introduced in July 2023, to release funds for a full-scale NAIRR, he says. “We need Congress to step up and take this seriously, and fund and invest,” says Wald. “If they don’t, we’re leaving it to the companies.”
Similarly, the United Kingdom plans a national AI Research Resource (AIRR) to provide supercomputer-level computing power to a broad range of researchers keen on studying frontier AI.
The UK government announced its plans for the AIRR in March. At the summit, the government said that it would triple the AIRR funding pot from £100 million (US$124 million) to £300 million, as part of a previous £900-million investment to transform UK computing capacity. Relative to the country’s population and gross domestic product, the UK investment is much more substantial than the US proposal, says Wald.
The plan is backed by two new supercomputers: Dawn in Cambridge, which aims to be running in the next two months; and the Isambard-AI cluster in Bristol, which is expected to come online next summer.
Isambard-AI will be one of the world’s top-5 AI-capable supercomputers, says Simon McIntosh-Smith, director of the Isambard National Research Facility at the University of Bristol, UK. Alongside Dawn, he says, “these capabilities mean that UK researchers will be able to train even the largest frontier models being conceived, in a reasonable amount of time”.
Such moves are helping countries like the United Kingdom to develop the expertise needed to guide AI for the public good, says Bengio. But legislation will also be needed, he says, to safeguard against future AI systems that are smart and hard to control.
“We are on a trajectory to build systems that are extremely useful and potentially dangerous,” he says. “We already ask pharma to spend a huge chunk of their money to prove that their drugs aren’t toxic. We should do the same.”