White House AI exec order raises questions on future of DoD innovation


Artificial intelligence concept. Brain over a circuit board. HUD future technology digital background (Getty Images)

WASHINGTON — A new artificial intelligence executive order signed by President Joe Biden today is being hailed by the administration as one of the “most significant actions ever taken by any government to advance the field of AI safety” in order to “ensure that America leads the way” in managing risks posed by the technology. 

But new regulations on how the commercial world develops AI could have an impact on how the Defense Department and industry collaborate moving forward, with a lot of unknown effects that will need to be worked out. 

“I think the biggest implication for DoD is how this will impact acquisition because…anybody who’s developing AI models and wanting to do business with the DoD is going to have to adhere to these new standards,” Klon Kitchen, the head of the global technology policy practice at Beacon Global Strategies, told Breaking Defense today. 

“The executive order has some pretty extensive requirements for anyone who’s developing or deploying dual-use models,” he added. “So all the major contractors and integrators and that kind of thing are going to have pretty significant reporting requirements associated with their frontier models.”

Though the text of the “Safe, Secure, and Trustworthy Artificial Intelligence” executive order has not yet been made publicly available, a fact sheet from the White House lays out its key tenets. Notably, the executive order directs “that developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government” and that federal agencies will also be issued guidance for their use of AI.

Kitchen said that although there seems to be an “intended alignment” between today’s EO and DoD’s own AI policies, like the Responsible AI Strategy and Implementation Pathway, there will be “some inevitable disjunctions that will have to get worked out.”

“My read is [that] the administration understands that and is trying … not to put undue burden on the industry, while at the same time trying to meaningfully address the very real concerns,” he said. “Industry and government are definitely going to disagree about where those lines should be drawn, but I do interpret the executive order as a general good faith effort to begin that conversation.”

According to the fact sheet, the National Institute of Standards and Technology will develop standards for making sure AI is secure, and federal agencies like the Departments of Homeland Security and Energy will address the impact of AI threats to critical infrastructure. In a statement, Eric Fanning, the head of the Aerospace Industries Association trade group, said his organization is “closely assessing” the document.

The fact sheet also says the National Security Council and White House chief of staff will develop a national security memorandum that lays out further actions related to AI and the White House will “establish an advanced cybersecurity program to develop AI tools to find and fix vulnerabilities in critical software, building on the Biden-Harris Administration’s ongoing AI Cyber Challenge.”

In a statement, Sen. Mark Warner, D-Va., chairman of the Senate Select Committee on Intelligence and co-chair of the Senate Cybersecurity Caucus, said “many” of the sections in the executive order “just scratch the surface.”

“Other areas overlap pending bipartisan legislation, such as the provision related to national security use of AI, which duplicates some of the work in the past two Intel Authorization Acts related to AI governance,” Warner said. “While this is a good step forward, we need additional legislative measures, and I will continue to work diligently to ensure that we prioritize security, combat bias and harmful misuse, and responsibly roll out technologies.”

In a statement, Paul Scharre, executive vice president and director of studies at the Center for a New American Security, said the requirement for companies to notify the government when training AI models and NIST’s red-teaming standards requirements are two of many “significant” steps being taken to advance AI safety.

“Together, these steps will ensure that the most powerful AI systems are rigorously tested to ensure they are safe before public deployment,” he said. “As AI labs continue to train ever-more-powerful AI systems, these are vital steps to ensure that AI development proceeds safely.”

According to Kitchen, “what’s really going to matter is how these various departments and agencies actually start building the rules and interpreting the guidance that they received in the executive order.”

“So I think the EO will provoke a lot of questions from industry, but it will be the individual agencies and departments who actually start to answer those questions,” he said. 
