

  • The development of artificial intelligence technology is happening at a rapid pace.
  • That’s made it hard for Congress to regulate it, but Biden and Trump have tried by executive order.
  • A lack of AI experts in government has also made it difficult for lawmakers to regulate. 


The battle over AI isn’t just happening in Silicon Valley among tech giants.

It’s also happening within the halls of Congress and the White House as lawmakers try to figure out how to rein in the technology without stalling progress.

Congress hasn’t been able to pass a comprehensive set of federal laws and regulations around artificial intelligence — most restrictions on the technology have been enacted at the state level — leaving President Joe Biden and former President Donald Trump to fill the gaps via executive order, an approach that offers little recourse against bad actors in the industry who cross the line.

Why does the US not have federal AI regulation?

Passing legislation in Congress can be a painfully slow and sometimes impossible process. Bills are often quashed in committee and on the chamber floors. Many legislators demand that amendments of their own be added to a bill before they’ll consider supporting it, slowing the process even further.


The chaos of the current session, with Republican infighting leading to the removal of former Speaker Kevin McCarthy, has made things even worse.

So far, the 118th Congress has passed just 1% of all proposed bills.

With it being increasingly difficult for Congress to pass substantive laws and establish industry regulations, presidents have used executive orders as a means of establishing precedents in groundbreaking and developing industries, such as AI.

How is the development of AI governed?

During Trump’s presidency, he issued several executive orders related to AI. In 2019 he signed “Maintaining American Leadership in Artificial Intelligence,” an executive order aimed at making the development of AI a national priority. And in 2020, he issued “Promoting the Use of Trustworthy AI in the Federal Government,” which set principles for how federal employees could safely and effectively use AI on the job.


Beyond executive orders, Trump created the National Science and Technology Council’s “Select Committee on AI” in 2018, which continues to advise the White House on ways the federal government can promote AI growth in the US.

More than 80 bills directly or indirectly addressing AI have been introduced in the current 118th Congress alone, but none have become law, prompting Biden and his administration to follow Trump’s example and set precedents by executive order.

Biden signed the executive order on “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” near the end of 2023. The 36-page directive set safety standards for AI researchers to follow, though critics say it gave federal agencies few tools to enforce them.

How do Trump’s and Biden’s AI policies differ?

Major AI powerhouses like Microsoft and Google have praised Biden’s efforts, but Trump promised in December 2023 that he’d overturn the executive order.


“When I’m reelected, I will cancel Biden’s artificial intelligence executive order and ban the use of AI to censor the speech of American citizens on day one,” Trump said.

Some conservative lobbyists and think tanks have criticized Biden’s regulations, arguing that the executive order stretches the Defense Production Act — a 1950 Korean War-era law empowering the president to unilaterally issue regulations and guidance to private companies during times of emergency — beyond the intended purpose of the act itself.

AI policy advocates don’t seem entirely convinced of that argument.

Trump and Biden’s “executive orders have contributed to a bipartisan consensus that AI ought to be trustworthy,” said Jason Green-Lowe, the Center for AI Policy’s executive director.


“It’s changed the culture,” he said. “You see sort of responsible scaling policies being rolled out on a voluntary basis by some of the more responsible labs, but then you have other companies that are just ignoring it, which right now is their legal right. Nobody’s required to make sure that they’re dealing with these catastrophic risks.”

How are policymakers balancing regulation and innovation?

Sen. Martin Heinrich speaks with Sam Altman, CEO of OpenAI, during a break as the Senate held an AI forum with industry leaders in Washington, DC.

Bill O’Leary/The Washington Post via Getty Images



Several AI-policy experts told Business Insider that they’re not completely against setting federal regulations on artificial intelligence as long as it won’t cripple research.

Some experts say regulation is necessary to further innovation. Among them is Rebecca Finlay, the CEO of Partnership on AI, a nonprofit focused on responsibly promoting the development and regulation of AI.

“We’ve been very clear that you need to have regulation in place in order to advance innovation,” Finlay said. “Clear rules of the road allow for more companies to be more competitive in being more innovative to do the work that needs to be done if we’re really going to take the benefits of AI. One of the things that we are advocating strongly for is a level of transparency with regard to how these systems are being built and developed.”


She said that she doesn’t think there’s a right or wrong decision between developing open- or closed-source AI tools — she said she’s seen “harms” from both types — as long as they’re both developed responsibly.

“Rather than arguing between a binary choice between open and closed, I think it’s really core that we hold all model developers and deployers accountable for ensuring that their models are developed as safely as possible,” she said.

Daniel Zhang, the senior manager for policy initiatives at the Stanford Institute for Human-Centered Artificial Intelligence, echoed Finlay’s hope that regulations don’t stifle research.

“We want to make sure the governance around open foundation models are, for the long term, beneficial for opening innovation,” Zhang said. “We don’t want to too-early restrict the development of open innovation that academia, for example, academic institutions thrive on.”


What are the challenges of crafting AI regulation?

Chuck Schumer smiles while Amy Klobuchar whispers into his ear.

The median age of the Senate is over 65, and lawmakers are having a difficult time hiring AI experts to their offices — most of those experts are choosing to work in the private sector.

Drew Angerer/Getty Images



One of the biggest hurdles that legislators face in regulating AI, Finlay said, is “just keeping up to the state of the science and the technology as it is developed.”

She said it’s difficult for lawmakers to draft regulations because most AI companies develop their models not in a “publicly funded research environment” but privately, sharing their advancements only when they choose to.

“The ideal solution would be to empower some kind of office or regulator to update the laws as they go forward,” Green-Lowe, from the Center for AI Policy, said.

That’s not the easiest thing to accomplish.


“We’re also in a moment where people are very concerned about overreach from executive power and about the proper role of bureaucracies or the civil service,” Green-Lowe said. “And so there are people in Congress who are skeptical that Congress can keep up with the changes in technology, but also skeptical that the power to do so should be delegated to an agency.”

He added that failing to implement a formal way of regulating the sector would effectively let companies play by their own rules, an outcome he and the Center for AI Policy don’t believe is the best course of action.

Another challenge comes from AI experts and researchers choosing private sector jobs instead of ones in the government, a kind of “brain drain,” Zhang said.

“Most of the new AI Ph.D.s that graduate in North America go to private industry,” he said, citing Stanford’s 2024 AI Index Report. “Less than 40% go to government looking to create all those AI regulations and governance structures.”


Where AI PhDs go after receiving their degrees

The vast majority of AI experts end up working in the private sector rather than for universities or federal governments.

Stanford’s Institute for Human-Centered AI



The lack of staffers who fully understand the complexity of AI and its trajectory puts more onus on an aging US Congress to regulate the far-reaching tech — a difficult task.

Zhang said there’s also a common misconception that working in government offers less money and fewer resources than working in the private sector.

“That’s not a hundred percent true,” he said. “For governments to appeal to those technical students, I think they just need to highlight the public service aspect and then give them the resources to be able to do their jobs.”

In January, the Biden Administration released a “call to service” aimed at solving this problem.


“We are calling on AI and AI-enabling experts to join us to advance this research and ensure the next generation of AI models is safe, secure, and trustworthy,” the administration said.