
Responsible use of AI in the military? US publishes declaration outlining principles

12 "best practices" for using AI and autonomous systems emphasize human accountability.

[Image: A soldier being attacked by flying 1s and 0s in a green data center.]

On Thursday, the US State Department issued a "Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy," calling for ethical and responsible deployment of AI in military operations among nations that develop such capabilities. The document sets out 12 best practices for the development of military AI capabilities and emphasizes human accountability.

The declaration coincides with the US taking part in an international summit on responsible use of military AI in The Hague, Netherlands. Reuters called the conference "the first of its kind." At the summit, US Under Secretary of State for Arms Control Bonnie Jenkins said, "We invite all states to join us in implementing international norms, as it pertains to military development and use of AI" and autonomous weapons.

In a preamble, the US declaration notes that an increasing number of countries are developing military AI capabilities, which may include the use of autonomous systems. This trend has raised concerns about the potential risks of such technologies, especially when it comes to complying with international humanitarian law. The preamble reads, in part:

Military use of AI can and should be ethical, responsible, and enhance international security. Use of AI in armed conflict must be in accord with applicable international humanitarian law, including its fundamental principles. Military use of AI capabilities needs to be accountable, including through such use during military operations within a responsible human chain of command and control. A principled approach to the military use of AI should include careful consideration of risks and benefits, and it should also minimize unintended bias and accidents. States should take appropriate measures to ensure the responsible development, deployment, and use of their military AI capabilities, including those enabling autonomous systems.

The 12 best practices listed in the document touch on nuclear weapons safety, responsible system design, personnel training, and auditing methodologies for military AI capabilities. The document also highlights the need to minimize unintended bias and accidents and the importance of testing to ensure the safety and effectiveness of military AI capabilities.

The document contains a few notable passages about keeping accountable humans in the chain of command for autonomous systems, especially regarding nuclear weapons: "States should maintain human control and involvement for all actions critical to informing and executing sovereign decisions concerning nuclear weapons employment."

It also covers unintended behavior of AI in military systems, something that has recently become a concern with consumer deep-learning systems: "States should design and engineer military AI capabilities so that they possess the ability to detect and avoid unintended consequences and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior."

The document does not specify exactly what type of autonomous or AI-powered systems are covered by the declaration, but since there is no universal definition of "artificial intelligence," it lays out its understanding of the term in a footnote. The document states, "For the purposes of this Declaration, artificial intelligence may be understood to refer to the ability of machines to perform tasks that would otherwise require human intelligence—for example, recognizing patterns, learning from experience, drawing conclusions, making predictions, or taking action—whether digitally or as the smart software behind autonomous physical systems."

Also on Thursday, more than 60 nations signed a "call to action" endorsing the responsible military use of AI. Reuters reports that human rights experts and academics noted that the statement is not legally binding and "failed to address concerns like AI-guided drones, 'slaughterbots' that could kill with no human intervention, or the risk that an AI could escalate a military conflict."

The full declaration document, created under the authority of the Bureau of Arms Control, Verification, and Compliance, is available through the US Department of State website.
