In response to growing concerns over the pace of development, lack of transparency, potentially anti-competitive behavior, and potentially harmful outputs and capabilities of artificial intelligence (AI) technologies, the Biden Administration on Oct. 30, 2023, issued an expansive, 111-page Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (EO). While the EO focuses in large part on AI systems of a certain caliber that pose the highest risk – such as foundation models like ChatGPT – it will also have wide-ranging impacts on developers and deployers of AI technologies. Importantly, this EO signals that the U.S. government (USG) is committed to adopting AI technologies, as many of the provisions encourage federal agencies to use AI safely and securely and foster the domestic development of advanced AI tools and related industries.
The EO directs agencies to take internal or external action to redefine AI governance, regulation and leadership. In an attempt to incentivize and require safety and security in the development and use of AI in government and the private sector, the EO covers the intersection of AI with critical infrastructure, cybersecurity, labeling, immigration, competition, workforce, intellectual property, privacy, chip development, criminal justice and civil rights. The EO also addresses national security and defense aspects of AI, although they are largely kept separate from other applications of the technology. A national security-specific memorandum may be issued subsequently. Agencies have between 45 and 365 days to complete most directives, and stakeholder engagement will be critical to most actions. There are, however, some reforms that have fairly short timelines. For example, reporting requirements to the U.S. Department of Commerce for dual-use foundation models and large-scale compute go into effect within 90 days of the issuance of the EO.1
The implementation of the EO and the USG’s AI-related policies will be overseen by a designated White House AI Council, which will be chaired by the assistant to the president and deputy chief of staff for policy, and will include the secretaries or their designees of nearly all cabinet agencies, as well as the Director of National Intelligence and the directors of the National Science Foundation (NSF), U.S. Office of Management and Budget (OMB) and Office of Science and Technology Policy (OSTP), among other representatives.2 Though many of the EO’s directives provide a specified amount of time for an agency to act, the White House will likely aim to implement as much as possible ahead of the 2024 election.
President Joe Biden’s EO carries the typical weaknesses inherent in unilateral executive branch action. Its provisions are vulnerable to court interdiction, legislative modification and abrogation by a new president. Thus, the EO is unlikely to forestall congressional efforts to develop AI legislation, including Senate Majority Leader Chuck Schumer’s (D-N.Y.) AI legislative push, which offers more certainty and may prescribe requirements on industry that are not addressed in the EO. Following the issuance of the EO, on Oct. 31, 2023, President Biden met with Majority Leader Schumer and a bipartisan group of lawmakers who are developing AI legislation to create additional momentum for congressional action.
The EO comes as AI industry leaders have publicly asked for federal regulation of their industry. The administration is not acting alone among governments. Both the European Union (EU) and China have released or begun work on regulations on AI use in their countries, and the United Kingdom (U.K.) is set to host the first global AI Safety Summit on Nov. 1-2, 2023, focused on frontier AI risks.
This Holland & Knight alert provides a summary of the Executive Order, followed by key takeaways for potentially impacted entities. These entities should closely monitor the implementation of the EO, engage with implementing agencies and work with Congress on proposed AI legislation.
Creating Policies to Ensure “Safe, Secure, and Trustworthy” AI Development and Use
The EO’s wide-ranging directives are framed by principles and priorities that build on the Administration’s previously released AI Bill of Rights and the National Institute of Standards and Technology (NIST) AI Risk Management Framework. Under the EO, federal agencies must adhere to the following eight principles and priorities in implementing the EO:
- AI must be safe and secure.
- Promoting responsible innovation, competition and collaboration will allow the United States to lead in AI and unlock the technology’s potential to solve some of society’s most difficult challenges.
- The responsible development and use of AI require a commitment to supporting American workers.
- AI policies must be consistent with the Biden Administration’s dedication to advancing equity and civil rights.
- The interests of Americans who increasingly use, interact with or purchase AI and AI-enabled products in their daily lives must be protected.
- Americans’ privacy and civil liberties must be protected as AI continues advancing.
- It is important to manage the risks from the federal government’s own use of AI and to increase the government’s internal capacity to regulate, govern and support responsible use of AI to deliver better results for Americans.
- The federal government should lead the way to global societal, economic and technological progress as the United States has in previous eras of disruptive innovation and change.3
These principles and priorities are integrated throughout the EO and provide a window into how the Biden Administration is seeking to tackle its ambitious and comprehensive AI agenda. Many of the EO’s directives will depend in part on the availability of appropriations. With increased pressure to lower government spending, funding and resource issues could delay or derail the EO’s goals.
The EO also defines key AI-related terms such as “artificial intelligence” and “AI system,” which sets a floor for the types of technologies within the scope of the EO. These definitions may need to change as AI and related technologies evolve. The definitions will, nevertheless, likely be utilized in legislation and in regulations and have a lasting effect.
Standards and Reporting Requirements Applicable to the Private Sector
The EO creates standards and reporting requirements for companies developing and deploying AI systems. In July 2023, President Biden convened a gathering of technology companies with large stakes in the AI space – including Google, Anthropic, Microsoft, Amazon, Inflection, Meta and OpenAI – and obtained non-binding commitments, ultimately from 15 companies, to conduct internal safety testing, prioritize security and build public trust in AI as they continue to develop and use the technology. The standards and requirements established by the EO would essentially codify these commitments.
The EO directs the Commerce Department – and specifically the Director of NIST – to, within 270 days, develop guidelines and best practices, creating de facto industry standards for entities developing and deploying AI systems. This includes developing a companion resource to NIST’s existing AI Risk Management Framework for generative AI4, developing a secure software development framework for generative AI and dual-use foundation models5, and launching an initiative to create guidance and benchmarks for evaluating and auditing AI capabilities, especially those that can cause harm (e.g., cybersecurity and biosecurity). The Secretary of Commerce is also tasked with establishing appropriate guidelines for use by AI developers in conducting AI red-teaming6 tests.
Additionally, the EO invokes the Defense Production Act7 to require 1) companies developing or demonstrating intent to develop potential dual-use foundation models to report to the USG, on an ongoing basis, information, reports or records such as the results of red-teaming tests to ensure the “continuous availability of safe, reliable, and effective AI,” and 2) companies, individuals or other organizations or entities that acquire, develop or possess a potential large-scale computing cluster to report any such acquisition, development or possession, including the existence and location of these clusters and the amount of total computing power in each cluster.8 The Secretary of Commerce is tasked with implementing the reporting regime. Until the secretary defines the technical conditions for models and computing clusters subject to this reporting requirement, the EO sets an interim technical threshold based on the quantity of computing power used to train a model or the computing capacity of a co-located computing cluster.9 The 90-day timeframe for these requirements signals that dual-use foundation models and large-scale computing clusters are two of the USG’s foremost concerns. Presumably, in this window, the secretary will create a reporting process for applicable entities.
The use of Defense Production Act authority in this context is already proving somewhat controversial, with some commentators calling it a back-door approach to regulation using legislation intended to confer emergency powers. As such, it may face an early test in the courts or in Congress.
Critical Infrastructure, Cybersecurity and Weapons Threats
To address cybersecurity concerns, the EO requires the Secretary of Commerce to propose regulations within 90 days that require U.S. infrastructure as a service (IaaS) providers to submit a report to the secretary when a foreign person transacts with that IaaS provider to train a large AI model with potential capabilities that could be used in malicious cyber-enabled activity (i.e., a “training run”).10 In addition, the federal government will mandate safety and security guidelines, which must incorporate the NIST AI Risk Management Framework, for critical infrastructure owners and operators. The EO also creates an Artificial Intelligence Safety and Security Advisory Committee at the U.S. Department of Homeland Security (DHS), which will include AI experts from the private sector, academia and government, to provide recommendations for improving security, resilience and incident response related to AI usage in critical infrastructure.
To capture the benefits of AI systems, the EO directs the Secretaries of Defense and Homeland Security to conduct an operational pilot project to identify, develop, test, evaluate and deploy AI capabilities to aid in the discovery of vulnerabilities in critical government software, systems and networks. The EO also requires the Secretary of Homeland Security to evaluate both the potential for AI to be misused to develop chemical, biological, radiological and nuclear (CBRN) threats and the potential for AI to counter such threats.
Authenticity, Provenance and Labeling
The Secretary of Commerce must develop guidance regarding the existing tools and practices for digital content authentication and synthetic content detection measures. The guidance must consider authenticated content and tracking its provenance, as well as watermarking and software solutions. The Federal Acquisition Regulatory Council may amend the Federal Acquisition Regulation (FAR) to take into account this guidance.
National Security and Immigration
The assistant to the president for national security affairs and the assistant to the president and deputy chief of staff for policy must oversee an interagency process to develop and submit to the president a National Security Memorandum on AI within 270 days.11 The memorandum must address AI used as a component of national security systems and by the military and intelligence agencies, and must address the risks and benefits posed by AI.
Part of the EO focuses on attracting AI talent to the U.S. through immigration policies. Specifically, the EO directs the Secretaries of State and Homeland Security to streamline the processing times of visa petitions and applications for non-citizens traveling to the U.S. seeking to work, study or conduct research in AI or other critical and emerging technologies. The secretaries must also consider initiating several rulemakings, including one to establish new criteria to designate countries and skills on the Exchange Visitor Skills List for the two-year foreign residence requirement for certain J-1 non-immigrants and another rulemaking to modernize the H-1B program.12 The secretaries are also expected to establish a program to identify and attract top talent in AI at universities, research institutions and the private sector from overseas. The provisions of the EO focused on immigration – a highly charged political issue – could be met with opposition or challenges.
Workforce and Labor
The EO requires the Secretary of Energy to create a pilot program to enhance existing successful training programs for scientists, with the goal of training, by 2025, 500 new researchers capable of meeting the rising demand for AI talent.13 The EO also requires the Secretary of Labor to create a report on how the USG can support workers displaced by AI, as well as develop and publish principles and best practices for employers to mitigate AI’s potential harms to employees, including the use of data about workers.14 Within 365 days, the secretary must publish guidance for federal contractors regarding nondiscrimination in hiring involving AI and other technology-based hiring systems to prevent unlawful discrimination from AI used for hiring.15
Copyright and Intellectual Property
In response, in part, to the recent writers’ and actors’ strikes, the EO attempts to address copyright and inventorship. The Under Secretary of Commerce for Intellectual Property and Director of the U.S. Patent and Trademark Office (USPTO) is tasked with issuing recommendations to the president on potential executive actions relating to copyright and AI.16 In addition, the USPTO director must publish guidance within 120 days about how patent examiners and applicants are addressing inventorship and the use of AI.17 Further, the EO provides the director of the USPTO a subsequent 150 days to provide information on patent eligibility for AI and other emerging technologies. This information may be critical in avoiding uncertainty and shifting standards for patent eligibility with respect to AI of the nature seen generally with computerized systems since the Alice decision in 2014.18
Healthcare
The EO encourages the U.S. Department of Health and Human Services (HHS) to collaborate with private-sector actors to support the advancement of AI-enabled tools, including for personalized immune-response profiles for patients. This will include grantmaking and other awards, including the 2024 Leading Edge Acceleration Project awards, to explore ways to improve healthcare, as well as the National Institutes of Health (NIH) Artificial Intelligence/Machine Learning Consortium to Advance Health Equity and Researcher Diversity (AIM-AHEAD) program.19 The EO places special emphasis on utilizing AI systems to address healthcare challenges for underserved communities, veterans and small businesses.20 The EO also requires the Secretary of HHS to establish an AI Task Force that must develop a strategic plan that includes policies and frameworks, including potential regulatory action, on responsible deployment and use of AI and AI-enabled technologies in the health sector within 365 days.21 This guidance must cover specific areas, including the incorporation of safety, privacy and security standards into the software-development lifecycle for personally identifiable information (PII). The secretary also has 365 days to develop a strategy for regulating the use of AI or AI-enabled tools in drug-development processes.22 The EO’s healthcare initiatives are not limited to those summarized here and should be reviewed closely by stakeholders to understand risks and opportunities.
Energy and Environment
The Secretary of Energy must collaborate with private-sector organizations and members of academia to support the development of AI tools to mitigate climate change risks.23 In addition, the secretary will expand partnerships with the private sector to utilize the U.S. Department of Energy’s (DOE) computing capabilities and AI testbeds to build foundation models that support new applications in science, energy and national security, with a focus on preparedness for climate-change risks, enabling clean-energy deployment (including addressing delays in environmental permitting) and enhancing grid reliability and resilience.24 Moreover, the secretary must establish an office within the DOE to coordinate AI development across programs and the 17 National Laboratories.25
Competition
The EO urges each agency to develop policies that promote competition in AI and related technologies using existing authorities. This may include addressing risks arising from concentrated control of key inputs, taking steps to stop unlawful collusion and prevent dominant firms from disadvantaging competitors, and working to provide new opportunities for small businesses and entrepreneurs. In particular, the EO encourages the Federal Trade Commission (FTC) to consider whether to exercise the agency’s existing authorities, including its rulemaking authority under the FTC Act,26 to ensure fair competition in the AI marketplace and to ensure that consumers and workers are protected from harms that may be enabled by the use of AI.27 Exercising this authority would be consistent with FTC Chair Lina Khan’s public commitment to promoting fair competition in the AI industry. In a May 2023 essay in The New York Times, Chairwoman Khan noted that the FTC is well-equipped to handle issues associated with the rapid development of AI, including collusion, monopolization, mergers, price discrimination and unfair methods of competition.
The EO places a special emphasis on competition in the semiconductor industry, which powers AI technologies, and, separately, for small businesses. Specifically, in implementing the CHIPS Act of 2022,28 the Secretary of Commerce is required to promote competition in the semiconductor industry by including competition-increasing measures in notices of funding availability for commercial research and development facilities, which is the last remaining Notice of Funding Opportunity (NOFO) to be issued under the CHIPS Incentives Program.29 This includes measures that increase access to facility capacity for startups or small firms developing chips used to power AI technologies.
The secretary must also implement a flexible membership structure for the National Semiconductor Technology Center (NSTC) to attract stakeholders, including startups and small firms.30 The secretary must also implement mentorship programs to increase interest and participation in the semiconductor industry and increase the availability of resources to startups and small businesses, including funding for physical assets (e.g., equipment or facilities); datasets collected or shared by CHIPS research and development programs; workforce programs; design and process technology, including intellectual property; and other resources.31
Equity, Civil Rights and Criminal Justice
The EO builds on other executive actions to address potential discriminatory impacts of AI in the criminal justice system and to identify best uses for AI technologies. To that end, the EO directs the attorney general to submit a report to the president that addresses the use of AI in the criminal justice system, including its use in crime forecasting32 and predictive policing.33 To promote equitable administration of public benefits, the Secretary of HHS must also publish a plan addressing the use of automated or algorithmic systems in the implementation by states and localities of public benefits and services to provide notice to recipients about the presence of such systems, conduct regular evaluations to detect unjust denials and establish processes to appeal denials to human reviewers.34 The Secretary of Agriculture has similar obligations under the EO for benefits and services programs under that jurisdiction.35
Separately, the EO encourages the Federal Housing Finance Agency (FHFA) and the Consumer Financial Protection Bureau (CFPB) to consider using their authorities to require regulated entities to use the appropriate methodologies, including AI tools, to ensure compliance with federal law, evaluate underwriting models for bias and automate valuation and appraisal processes to minimize bias.36 Additional guidance is encouraged to address the use of AI tools in making decisions about housing and other real estate transactions such as tenant screening systems and the advertising of housing, credit or other real estate transactions through digital platforms.37
Transportation, Education and Telecommunications
Within 90 days, the Secretary of Transportation must direct appropriate federal advisory committees to provide advice on the safe and responsible use of AI in transportation. A new U.S. Department of Transportation (DOT) Cross-Modal Executive Working Group will be established and will solicit input from stakeholders. In addition, the secretary must direct the Non-Traditional and Emerging Transportation Technology (NETT) Council to assess the need for information, technical assistance and guidance regarding the use of AI in transportation, as well as to support pilot projects.38 Regulatory actions may be an output of the pilot programs. The secretary must also direct the Advanced Research Projects Agency-Infrastructure (ARPA-I) to explore the transportation-related opportunities and challenges of AI – including regarding software-defined AI enhancements impacting autonomous mobility ecosystems. ARPA-I grants will be prioritized for this purpose.39 The DOT will likely solicit input on these opportunities and challenges through a request for information (RFI).
Separately, the EO requires the Secretary of Education to develop resources, policies and guidance regarding AI.40 The resources will include a new AI toolkit for education leaders implementing recommendations from the U.S. Department of Education’s AI and the Future of Teaching and Learning report, including appropriate human review of AI decisions and designing systems that align with privacy-related laws.41
Lastly, the EO encourages the Federal Communications Commission (FCC) to consider actions related to how AI will affect communications networks and consumers. This includes potential rulemaking to combat unwanted robocalls and robotexts that are facilitated or exacerbated by AI.42
Privacy
Since foundation models are trained on data – including personal and sensitive data – and can make inferences about individuals, privacy risk is a major concern. The USG collects and maintains vast amounts of personal data, and AI can exacerbate the risk that this data is unintentionally misused. The EO attempts to address these concerns by requiring the Director of the OMB to, among other things, evaluate and take steps to identify commercially available information (CAI) procured by federal agencies – particularly CAI that contains PII, including CAI procured from data brokers and CAI procured and processed indirectly through vendors – in appropriate agency inventory and reporting processes.43 This does not include CAI used for national security purposes. The director must also evaluate potential guidance to agencies on ways to mitigate privacy risks from agencies’ activities related to CAI.44 The EO also requires the Secretary of Energy to advance research, development and implementation related to the USG’s use of privacy-enhancing technologies.45
USG AI Governance and Procurement
The EO has a significant focus on enhancing the USG’s practices and procedures with respect to soliciting, developing and using AI systems. Depending on the system, AI could pose risks to the USG if the appropriate guardrails are not adopted. The EO, therefore, requires the OMB director to convene and chair an interagency council to coordinate the development and use of AI in agencies’ programs and operations.46 The director must issue guidance on the use of AI, and the guidance must require the designation of a chief artificial intelligence officer at each agency who is responsible for coordinating the agency’s use of AI.47 The guidance will also provide recommendations regarding external testing and safeguards for AI, watermarking or labeling of output from generative AI, mandatory minimum risk-management practices, independent evaluation of vendors’ claims concerning the effectiveness and risk of their AI offerings, and requirements for public reporting.48 NIST will issue guidelines, tools and practices to support implementation of the minimum risk-management practices. The OMB director will also develop an initial means to ensure that agency contracts for the acquisition of AI systems and services align with prescribed guidance. This process could result in new standards or requirements for federal contractors.49
In general, the EO discourages agencies from imposing broad general bans on the use of generative AI and instead puts safeguards into place. Generative AI offerings will be prioritized in the Federal Risk and Authorization Management Program (FedRAMP) authorization process, for which a framework will be developed.50 In general, there will be an increased focus – led by the Administrator of General Services – on facilitating access to governmentwide acquisition solutions for specific types of AI services and products, explicitly including generative AI and specialized computing infrastructure.51
Global Leadership
The EO includes provisions aimed at strengthening U.S. leadership in AI. The efforts will be led primarily by the Secretary of State and include establishing an international framework for managing the risks and harnessing the benefits of AI.52 This includes expanding and internationalizing the voluntary commitments made by 15 U.S. companies, as well as developing common regulatory and other accountability principles with foreign nations. The EO also seeks to advance global technical standards for AI development with key international partners.53 As first steps, the secretary will publish an AI in Global Development Playbook and will lead efforts with international partners to respond to critical infrastructure disruptions resulting from AI.54
Key Takeaways
This alert has identified key takeaways that warrant consideration or action by members of the growing AI or AI-enabled industries. Much of the EO requires agencies to take steps to develop guidance or regulations. Even absent a formal rulemaking process, guidance sets a standard of care that, if complied with, helps mitigate risk. The EO also has tangential impacts on other government programs and industries, including the semiconductor, pharmaceutical and education technology industries.
As stakeholders digest the EO, consider the following highlights in terms of internal AI governance and federal engagement on AI, including procurement:
- Implementation of the EO will take significant time, government resources and stakeholder engagement. It is important to develop a tracking mechanism for relevant developments and participate in the process where necessary and appropriate.
- A reporting requirement for dual-use foundation models and large-scale computing cluster acquisitions takes effect within 90 days and will apply until the Secretary of Commerce builds out a more complete reporting system.
- Identify any pilot programs, grants or other USG programs that may be of interest to your company or organization.
- Potentially impacted companies and organizations should cross-reference their AI standards and practices against what the EO has prescribed to identify gaps and plan for future federal guidance or regulation. Monitoring of updates will be important with respect to updating internal AI use policies to ensure compliance with changing regulations.
- Current and prospective government contractors who use AI technologies could be subject to new requirements under the EO and should closely follow its implementation.
As agencies begin to implement the EO, there may be opportunities for stakeholders to weigh in. For more information on the EO and its implications, please contact the authors.
Notes
1 Section 4.2.
2 Section 12.
3 Section 2.
4 “Generative AI” is defined in the EO as “the class of AI models that emulate the structure and characteristics of input data in order to generate derived synthetic content. This can include images, videos, audio, text, and other digital content.” Section 3(p).
5 “Dual-use foundation model” is defined in the EO as “an AI model that is trained on broad data; generally uses self-supervision; contains at least tens of billions of parameters; is applicable across a wide range of contexts; and that exhibits, or could be easily modified to exhibit, high levels of performance at tasks that pose a serious risk to security, national economic security, national public health or safety, or any combination of those matters…”. Section 3(k).
6 “AI red-teaming” is defined in the EO as “structured testing effort to find flaws and vulnerabilities in an AI system, often in a controlled environment and in collaboration with developers of AI.” Section 3(d).
7 50 U.S.C. 4501 et seq.
8 Section 4.2.
9 Id.
10 Section 4.2(c).
11 Section 4.8.
12 Section 5.1.
13 Section 5.2(b).
14 Section 6.
15 Section 7.3.
16 Section 5.2(c).
17 Id.
18 Alice Corp. v. CLS Bank Int’l., 573 U.S. 208 (2014).
19 Section 5.2(e).
20 Section 5.2(f).
21 Section 8(b).
22 Section 8(b)(v).
23 Section 5.2(g).
24 Id.
25 Id.
26 15 U.S.C. 41 et seq.
27 Section 5.3(a).
28 Public Law 117-167.
29 Section 5.3(b).
30 Id.
31 Id.
32 “Crime forecasting” is defined in the EO as “the use of analytical techniques to attempt to predict future crimes or crime-related information. It can include machine-generated predictions that use algorithms to analyze large volumes of data, as well as other forecasts that are generated without machines and based on statistics, such as historical crime statistics.” Section 3(g).
33 Section 7.1(b).
34 Section 7.2(b)(i).
35 Section 7.2(b)(ii).
36 Section 7.3(b).
37 Id.
38 Section 8(c).
39 Id.
40 Section 8(d).
41 Id.
42 Section 8(e).
43 Section 9(a).
44 Id.
45 Section 9(b).
46 Section 10(a).
47 Section 10(b).
48 Id.
49 Section 10(d).
50 Section 10(f)(2).
51 Id.
52 Section 11(a)(ii).
53 Section 11(b).
54 Section 11(c)-(d).
Information contained in this alert is for the general education and knowledge of our readers. It is not designed to be, and should not be used as, the sole source of information when analyzing and resolving a legal problem, and it should not be substituted for legal advice, which relies on a specific factual analysis. Moreover, the laws of each jurisdiction are different and are constantly changing. This information is not intended to create, and receipt of it does not constitute, an attorney-client relationship. If you have specific questions regarding a particular fact situation, we urge you to consult the authors of this publication, your Holland & Knight representative or other competent legal counsel.