The Promise & Peril of Autonomous Weapons: A Primer

 


Adding autonomous decision-making to kinetic weapons systems without appropriate legal and doctrinal safeguards threatens to create the so-called "accountability gap". Nations are legally and morally obligated to close this gap before deploying lethal autonomous weapons systems (LAWS).

 

ONE9 COO Michael M. Smith is an expert in this field, having engaged in real-world targeting in the Office of the Judge Advocate General and having written his Master of Laws (LL.M.) research on this very topic.

 

BLUF

Every nation is obligated under the Law of Armed Conflict (LOAC) to balance military necessity (operational goals) with humanity (protection of non-combatants, etc.). A failure to implement necessary accountability frameworks is a failure to respect basic tenets of LOAC. We must never sacrifice our moral ambition at the altar of operational effectiveness, and particularly not because we fear our adversaries might do the same. To do so would be to race to the bottom.

 

Context

The allure of LAWS is understandable. Systems which, "based on conclusions derived from gathered information and preprogrammed constraints, [are] capable of independently selecting and engaging targets" (see Crootof, The Killer Robots Are Here: Legal and Policy Implications), are arguably better able to accomplish tasks that are dull, dirty, and dangerous without risking human life. However, these systems also fundamentally challenge some of the foundations of LOAC, since humans are now delegating decisions that were previously made by combatants in a given battlespace.

From the ongoing conflict in Ukraine to America’s Defense Innovation Unit Replicator program and China's continued advances in this space, it's clear that LAWS will be here sooner rather than later. But all nations are obligated under LOAC to ensure that their systems adhere to the very body of law designed to minimize suffering in war.

 
 


 
 

The Accountability Gap

Accountability within LOAC functions best when there is a "body to be kicked, and a soul to be damned". Both nations and the international community have created mechanisms to deal with users of force who breach LOAC, such as military justice systems and the International Criminal Court. For example, attacks that fail to discriminate between protected persons and objects and valid military objectives, or attacks that are disproportionate, in which the expected "collateral damage" outweighs the anticipated military advantage to be gained, are uses of force for which an individual is typically liable, both domestically and internationally.

But what happens when the decision-maker is an autonomous system? Enter the “accountability gap”. 

United States Air Force Secretary Frank Kendall recently posed some core questions: "It’s who do you hold accountable ... I think we’ve got to think through. Is it the person who used the weapon? Is it the designer? Is it the tester? Is it somebody in the chain of command? I think there needs to be a discussion about the mechanism by which people are held responsible for whatever weapons do when they do something that’s not allowed."

 
 


 
 

Meaningful Human Control

Legal scholars, legal officers, militaries, and NGOs have been asking these very questions in earnest for at least a decade. The best solution to date is the assertion that LAWS should only be used with "meaningful human control" (MHC). Unfortunately, the content of this seemingly simple phrase remains difficult to articulate, let alone operationalize.

As Kendall goes on to note, “[o]ur policy is to have [MHC] of the application of force, and we’re gonna keep that. But that leaves a lot of gray space in terms of how certain are you, what’s the degree of certainty you have that that’s a threat before you commit a weapon, and what degree of competency you want to have that you’re not going to impose collateral damage and kill civilians unnecessarily.”

But these concerns must also be balanced against the ever-increasing speed of OODA (observe, orient, decide, act) loops and kill chains as more decision-making is delegated to autonomous systems. “Kendall himself has worried aloud for years that a human operator might make decisions too slowly to survive against a fully computer-controlled threat. ‘If the human being is in the loop, you will lose. You can have human supervision, you can watch over what the AI is doing. If you try to intervene, you’re going to lose.’”

It is interesting to note that Canada's Department of National Defence has staked out the position that any application of AI will be used with "appropriate human involvement", the appropriateness of which will depend on the application. Think of this as Canada's attempt to understand and apply MHC. Notably, for decisions involving lethal force, Canada says that it will always "retain the human in the loop". Stated otherwise, Canada is telling the world that, at least for now, meaningful control requires active human participation, and it is willing to sacrifice a degree of speed to ensure its control of LAWS allows for the necessary accountability.
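To make that distinction concrete, consider the following minimal sketch. It is purely illustrative: the names (EngagementRequest, request_operator_approval, and so on) are invented for this example and describe no real system or policy. It simply contrasts a human-in-the-loop design, in which no force is applied without affirmative operator authorization, with a human-on-the-loop design, in which the system proceeds unless the operator intervenes.

    # Illustrative sketch only; all names are hypothetical and describe no real system.
    from dataclasses import dataclass

    @dataclass
    class EngagementRequest:
        target_id: str
        confidence: float         # system's assessed confidence the target is a lawful objective
        collateral_estimate: int  # collateral damage estimate produced by the targeting process

    def request_operator_approval(req: EngagementRequest) -> bool:
        """A human operator reviews the request and explicitly authorizes or denies it."""
        answer = input(f"Authorize engagement of {req.target_id} "
                       f"(confidence={req.confidence:.2f}, CDE={req.collateral_estimate})? [y/N]: ")
        return answer.strip().lower() == "y"

    def human_in_the_loop(req: EngagementRequest) -> bool:
        # Human IN the loop: nothing happens without explicit human authorization.
        # Slower, but the authorizing human remains clearly accountable.
        return request_operator_approval(req)

    def human_on_the_loop(req: EngagementRequest, operator_veto: bool) -> bool:
        # Human ON the loop: the system proceeds by default; the human may only veto.
        # Faster, but accountability for any individual engagement is harder to locate.
        return not operator_veto

Even this toy example makes the trade-off visible: the in-the-loop path blocks on a human decision, while the on-the-loop path only offers a window for intervention. That gap is precisely where speed is traded against the accountability Canada is seeking to preserve.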

 

Striking the Balance While Resisting the Race to the Bottom

The balance that must be struck, then, is between maximum speed and operational effectiveness on the one hand, and a degree of control sufficient for the humans involved to rightly be held accountable on the other, thereby allowing for the legally compliant use of LAWS. Many, including Secretary Kendall, have voiced concerns that our adversaries will not feel similarly bound by international law. “They will field systems which are clearly about their operational effectiveness, without regard to collateral damage or inappropriate engagements. And the more stressing the operational situation is, the more inclined they’ll be to relax their constraints.”

To field systems without MHC, however, would not just be illegal; it would create an environment in which force could be used with impunity, upset hundreds of years of international law, both LOAC and international criminal law, and risk ushering in an era of warfare from which it would be difficult to turn back.

 

ONE9 Operational Analysis

While it is true that our adversaries may very well feel no compunction about breaching LOAC, this should never be grounds for us to do the same. The same argument applies to every extant aspect of LOAC. It is the same reason that, despite ostensible operational advantages, western forces do not engage in perfidy, execute prisoners, or actively target civilians and protected structures. More than ever, the global west is fighting to preserve the rules-based international order, of which LOAC forms an important part. Fielding LAWS that are non-compliant simply because we fear our adversaries will do so would bring into question the very things for which we fight.

In seeking to strike the appropriate balance for MHC, lawyers, operators, and NGOs must all also remember that targeting operations are process-driven, not outcome-driven. Collateral damage estimates are just that: estimates, based on the best information available to a given decision-maker, whether human or not, at the time of the attack. Judging the outcome of attacks solely on collateral damage does not reflect the law and is unhelpful to the wider debate. This means that as we design, test, and deploy autonomous systems, we must ensure that the proper processes are in place throughout, processes that allow for human control to be truly meaningful, even in a battlespace compressed in time and space. This will result in systems that are legally compliant and reflective of our moral ambitions, and that will therefore provide the necessary accountability.

Finally, operationalizing MHC within lethal autonomous decision-making will demand a truly multidisciplinary approach spanning the public/private divide during development and perhaps even more so during testing, requiring difficult ethical decisions to be made along the way. For example, some fundamental questions remain unanswered:

  • How far up the causal chain can we locate individual accountability?

  • To what standard must an autonomous system be held during validation? 

  • Must it be at least as good as an average human? If better, by how much?

No single party can answer these questions. The answers will require input from many stakeholders across government, academia, and civil society, all of whom are rightfully entitled to have their views heard and considered.

 

ONE9 Investment Analysis

The current defence startup scene is understandably witnessing a boom in autonomous systems, many of which are, or will be, used to create kinetic effects in the battlespace. The responsibility for ensuring that a given system is LOAC-compliant will ultimately rest with whichever military chooses to procure and deploy it. However, because it is private-sector companies that are developing the software and algorithms that will enable autonomous lethal decision-making, prudent investors in this space must have access to subject matter expertise able to assess the policy and legal compliance of a company’s system.

Investors must also, perhaps more than ever, do their best not to get caught up in the hype around a given technology. This is particularly so in the case of AI applied to lethal decision-making. This field is truly novel and emerging, and lawyers, operators, and developers have yet to functionally crack the code for operationalizing MHC. If anyone tells you otherwise, they are almost certainly lying.

 
 


 
 

Conclusion

Delegating lethal decision-making to non-human systems raises fundamental questions about what it means to be human and what it means to take another human life. We must collectively strive to solve the myriad problems presented by LAWS as quickly as possible. It unfortunately appears as though the proverbial frog is being boiled before our very eyes. When the lethal autonomous future arrives, we will be better off for having done our best to prepare for it.

If you'd like to go even deeper, click below to read Michael’s research.

 