The Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy



On 1 November 2023, Vice President Harris announced that 32 States had endorsed the Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy (Declaration) that the United States developed. The number has grown in the days since the Vice President’s announcement.

States unveiled the Declaration on 16 February 2023, at the Summit on Responsible AI in the Military Domain in The Hague, Netherlands. The underlying rationale for the Declaration was that the United States “view[s] the need to ensure militaries use emerging technologies such as AI responsibly as a shared challenge.” The Declaration is described as,

a series of non-legally binding guidelines describing best practices for responsible use of AI in a defense context. These include ensuring that military AI systems are auditable, have explicit and well-defined uses, are subject to rigorous testing and evaluation across their lifecycle, and that high-consequence applications undergo senior-level review and are capable of being deactivated if they demonstrate unintended behavior.

The Declaration does not alter existing legal obligations of the endorsing States, nor does it add any new obligations under international law.

The Declaration is intended as the “beginning of a process” which will lead to the “develop[ment] of strong international norms of responsible behavior.” In the first quarter of 2024, the United States will convene what is envisioned as “a regular dialogue among endorsing states to further promote international support for and implementation of these responsible practices.”

This post provides background and commentary on the Declaration. One author, Shawn Steene, was involved in its development, while the other author, Chris Jenks, was not. A number of individuals, civilian and military, working in the U.S. Departments of Defense (DoD) and State, along with allies and partners, are responsible for the Declaration and its endorsements. The goal of this post is to promote the Declaration and to prompt continued discussion.

Background

Earlier this month, a U.S. fact sheet accompanying the Declaration explained,

AI will transform militaries—from managing logistics to how they train and operate. Responsible militaries will apply AI in ways that lower risk and bolster stability; higher-quality information will be provided to decision-makers faster so they can make better decisions, which could help avoid unintended escalation.

Harnessing AI should be done with a careful, principled approach to avoid unpredictable and negative consequences. States must also develop AI consistent with their existing international legal obligations, including International Humanitarian Law.

The Political Declaration seeks to build international support for responsible norms in the military use of AI and autonomy. It consists of non-legally binding guidelines that describe best practices for responsible military use of AI and aims to promote responsible behavior and demonstrate collective leadership.

The goals of the Political Declaration include:

– States commit to strong and transparent norms that apply across military domains, regardless of a system’s functionality or scope of potential effects.

– States commit to pursue continued discussions on how military AI capabilities are developed, deployed, and used in a responsible manner, and to continue to engage the rest of the international community to promote these measures.

– The Political Declaration preserves the right to self-defense and States’ ability to responsibly develop and use AI in the military domain.

The Declaration

This section provides the text of the Declaration in italics followed by associated commentary. This commentary is intended to facilitate understanding of the Declaration as well as its effective implementation.

Introductory Paragraph

An increasing number of States are developing military AI capabilities, which may include using AI to enable autonomous functions and systems. Military use of AI can and should be ethical, responsible, and enhance international security. Military use of AI must be in compliance with applicable international law. In particular, use of AI in armed conflict must be in accord with States’ obligations under international humanitarian law, including its fundamental principles. Military use of AI capabilities needs to be accountable, including through such use during military operations within a responsible human chain of command and control. A principled approach to the military use of AI should include careful consideration of risks and benefits, and it should also minimize unintended bias and accidents. States should take appropriate measures to ensure the responsible development, deployment, and use of their military AI capabilities, including those enabling autonomous functions and systems. These measures should be implemented at relevant stages throughout the life cycle of military AI capabilities.

Commentary

The endorsing States recognize that concepts of artificial intelligence and autonomy are subject to a range of interpretations. For the purpose of this Declaration, artificial intelligence may be understood to refer to the ability of machines to perform tasks that would otherwise require human intelligence. This could include recognizing patterns, learning from experience, drawing conclusions, making predictions, or generating recommendations. An AI application could guide or change the behavior of an autonomous physical system or perform tasks that remain purely in the digital realm. Autonomy may be understood to exist on a spectrum and to involve a system operating without further human intervention after activation.

The introductory paragraph’s use of “military AI capabilities” is neither a euphemism for, nor synonymous with, AI-enabled weapons or autonomous weapon systems. The term “military AI capabilities” includes weapons but also decision support systems that help defense leaders at all levels make better and more timely decisions, from the battlefield to the boardroom, as well as systems relating to everything from finance, payroll, and accounting, to the recruiting, retention, and promotion of personnel, to the collection and fusion of intelligence, surveillance, and reconnaissance data.

The introductory paragraph’s reference to “applicable international law” recognizes that States have different obligations under international law, for example, based on what treaties the State has joined. It bears repeating that the Declaration is not legally binding; it does not add to or alter the obligations of the endorsing States under international law. However, all States have obligations under the law of armed conflict, and in particular its fundamental principles, which constitute customary international law applicable to all States. The International Court of Justice has recognized the importance of fundamental principles of international humanitarian law in addressing new technologies.

Declaration Measures

The endorsing States believe that the following measures should be implemented in the development, deployment, or use of military AI capabilities, including those enabling autonomous functions and systems:

Declaration Paragraph A

A. States should ensure their military organizations adopt and implement these principles for the responsible development, deployment, and use of AI capabilities.

Commentary

Paragraph A reflects that the Declaration entails not only endorsement at the political level, but also work by endorsing States to ensure that their military and defense organizations adopt and implement certain measures in practice.

Declaration Paragraph B

B. States should take appropriate steps, such as legal reviews, to ensure that their military AI capabilities will be used consistent with their respective obligations under international law, in particular international humanitarian law. States should also consider how to use military AI capabilities to enhance their implementation of international humanitarian law and to improve the protection of civilians and civilian objects in armed conflict.

Commentary

Paragraph B emphasizes the need for States to take appropriate steps to ensure that their forces will use military AI capabilities in compliance with the law of armed conflict.

Legal reviews are one example, but other steps, such as mechanisms for reporting law of armed conflict violations or training on the law of armed conflict, can also be among the appropriate steps. The U.S. military articulates such steps in DoD Directive 2311.01, DoD Law of War Program.

Such training could be part of the training of personnel called for in Paragraph G, so that personnel who use or approve the use of military AI capabilities sufficiently understand the capabilities and limitations of those systems in order to make appropriate context-informed judgments on their use. Similarly, Paragraph E calls for States to ensure that relevant personnel exercise appropriate care in the development, deployment, and use of military AI capabilities. “Appropriate care” would of course include adherence to the law of armed conflict when using military AI capabilities in the context of armed conflict.

In addition to these appropriate steps to ensure that military AI capabilities are used in a manner consistent with a State’s legal obligations, that State should also consider how those military AI capabilities might be used to improve the protection of civilians and civilian objects beyond what the law of armed conflict requires.

Declaration Paragraph C

C. States should ensure that senior officials effectively and appropriately oversee the development and deployment of military AI capabilities with high-consequence applications, including, but not limited to, such weapon systems.

Commentary

Paragraph C refers to development and deployment of military AI capabilities, but not use. Use of military AI capabilities, and appropriate steps to ensure that such systems are used in a responsible and lawful manner, are addressed in Paragraph B, as well as Paragraphs E and G.

Paragraph C calls for senior officials to go beyond general oversight of the defense establishment of an endorsing State and to oversee the development and deployment of military AI capabilities more specifically and particularly. Examples of such practices are the senior-level reviews that U.S. Department of Defense Directive (DODD) 3000.09, “Autonomy in Weapon Systems,” requires for autonomous weapon systems, with specified exceptions, prior to formal development and again before fielding.

Declaration Paragraph D

D. States should take proactive steps to minimize unintended bias in military AI capabilities.

Commentary

Paragraph D is based on the DoD AI Ethical Principle “Equitable,” which provides that “[t]he Department will take deliberate steps to minimize unintended bias in AI capabilities.” It is very similar to the NATO Principle of Responsible Use “Bias Mitigation,” which provides, “Proactive steps will be taken to minimise any unintended bias in the development and use of AI applications and in data sets.”

The qualifier “unintended” is needed here because many types of bias are possible and there are likely to be forms or types of bias that will be intended. For example, a military AI capability developed to assist in screening personnel for promotions or command opportunities may have certain biases knowingly built into it. There may be a desire to increase the promotion rate of a given career field, such as Air Defense or Military Police. Or perhaps there is a desire to increase the promotion rates of personnel with particular skillsets such as language skills or technical certifications.

The term “minimize” is used as a standard rather than “eliminate” because completely eliminating unintended bias is an unattainable standard.

Unintended biases could include, but are not limited to, biases based on race, color, national origin, religion or faith, and sex or gender.

Declaration Paragraph E

E. States should ensure that relevant personnel exercise appropriate care in the development, deployment, and use of military AI capabilities, including weapon systems incorporating such capabilities.

Commentary

Paragraph E qualifies the bounds of the requirement to exercise appropriate care. The qualifier “relevant” is needed here because different subsets of military or defense personnel will be involved in the development, deployment, and use of specific military AI capabilities. Not all personnel will be involved in any or all of these activities.

The DoD Responsible AI Strategy and Implementation Pathway, released in 2022, refers to the exercise of appropriate care in the AI production and acquisition lifecycle as “ensur[ing] potential AI risks are considered from the outset of an AI project, and efforts are taken to mitigate or ameliorate such risks and reduce unintended consequences, while enabling AI development at the pace the Department needs to meet the National Defense Strategy.”

Declaration Paragraph F

F. States should ensure that military AI capabilities are developed with methodologies, data sources, design procedures, and documentation that are transparent to and auditable by their relevant defense personnel.

Commentary

Paragraph F is intended to ensure that relevant defense personnel understand and can audit the methodologies, data sources, design procedures, and documentation relating to the development of military AI capabilities. It is not intended to suggest or require that personnel outside of a given State’s defense establishment be able to understand and audit those methodologies, data sources, design procedures, and documentation.

Declaration Paragraph G

G. States should ensure that personnel who use or approve the use of military AI capabilities are trained so they sufficiently understand the capabilities and limitations of those systems in order to make appropriate context-informed judgments on the use of those systems and to mitigate the risk of automation bias.

Commentary

Paragraph G reinforces that personnel who use or approve the use of military AI capabilities (operators and/or commanders) must make context-informed judgments on the use of those military AI capabilities. This requires that those personnel have a sufficient understanding of the capabilities and limitations of the system(s) in question. Possessing such understanding will also mitigate the risk of automation bias, the phenomenon in which humans over-trust a machine and reflexively or thoughtlessly concur with its recommendations without exercising appropriate context-informed judgment. Paragraph G also relates to, and is mutually reinforcing with, Paragraph E.

Declaration Paragraph H

H. States should ensure that military AI capabilities have explicit, well-defined uses and that they are designed and engineered to fulfill those intended functions.

Commentary

Ensuring that military AI capabilities have explicit and well-defined uses or use cases facilitates designing and engineering them to fulfill those intended functions. The less well-defined the use case for a given military AI capability, the greater the chance that the capability will have functions or generate outcomes that were not intended. Paragraph H also works in conjunction with Paragraph I in that military AI capabilities with explicit and well-defined uses are more readily subject to appropriate and rigorous testing and assurance.

Declaration Paragraph I

I. States should ensure that the safety, security, and effectiveness of military AI capabilities are subject to appropriate and rigorous testing and assurance within their well-defined uses and across their entire life-cycles. For self-learning or continuously updating military AI capabilities, States should ensure that critical safety features have not been degraded, through processes such as monitoring.

Commentary

Testing and assurance should be “appropriate” and “rigorous” to account for the different degrees of risk and the disparate consequences of failures across different military AI capabilities. The more consequential the failures of a given military AI capability might be, the more rigorous the testing and assurance that should be applied to it. If the military AI capability in question uses self-learning or “in-situ” learning, then additional safeguards, proportional to the potential consequences of failure, should be in place to ensure that critical safety features have not been degraded.

Declaration Paragraph J

J. States should implement appropriate safeguards to mitigate risks of failures in military AI capabilities, such as the ability to detect and avoid unintended consequences and the ability to respond, for example by disengaging or deactivating deployed systems, when such systems demonstrate unintended behavior.

Commentary

Paragraph J applies to any and all military AI capabilities; it is not limited to weapons or to capabilities used in, or relevant to, an attack. The unintended consequences may be the result of lawful conduct that was, nevertheless, unintended. Paragraph J calls for appropriate safeguards, which could operate at various stages of the lifecycle of a system (e.g., design, development, etc.), not only when the system is being used in operations. One example of such a safeguard is AI-enabled monitoring and self-destruct systems on space-launch vehicles. These autonomous capabilities can react much faster, and thereby provide a greater safety margin, than manual monitoring and initiation of self-destruct sequences.

In considering what would qualify as an “appropriate safeguard,” one should consider the potential consequences of the failures being guarded against. Consequences that would be significant or serious warrant safeguards that are correspondingly significant or stringent. Consequences that are undesirable but not serious or significant warrant safeguards that are correspondingly less significant or stringent.

Commitments of Endorsing States

In order to further the objectives of this Declaration, the endorsing States will:

– implement these measures when developing, deploying, or using military AI capabilities, including those enabling autonomous functions and systems;

– make public their commitment to this Declaration and release appropriate information regarding their implementation of these measures;

– support other appropriate efforts to ensure that military AI capabilities are used responsibly and lawfully;

– pursue continued discussions among the endorsing States on how military AI capabilities are developed, deployed, and used responsibly and lawfully;

– promote the effective implementation of these measures and refine these measures or establish additional measures that the endorsing States find appropriate; and

– further engage the rest of the international community to promote these measures, including in other fora on related subjects, and without prejudice to ongoing discussions on related subjects in other fora.

Commentary

The elements of the Declaration are mutually reinforcing and are best considered as a whole rather than in isolation from one another.

Conclusion

This Declaration is an important first step, but still only a first step. As noted in the bulletized “Commitments of Endorsing States,” the endorsing States will meet, starting early in 2024, to begin sharing best practices and lessons learned in order to help one another improve their ability to develop, deploy, and use military AI capabilities in a responsible and lawful manner.

***

Shawn Steene is a Senior Policy Advisor for Autonomous Weapons Policy and Directed-Energy Weapons Policy in the Office of the Under Secretary of Defense for Policy.

Chris Jenks is a Professor of Law at the SMU Dedman School of Law in Dallas, Texas. He is a fellow at the Center for Autonomy and Artificial Intelligence in Arlington, Virginia and a research fellow at the Program on the Regulation of Emerging Military Technology in Australia.

Photo credit: Unsplash

