Operationalising human control: Practical implications for limits on autonomous weapons


Robotic weapon systems with autonomy in their critical functions of selecting and attacking targets raise novel legal, ethical and operational questions for states and their militaries. This is especially so for autonomous weapon systems (AWS) that use artificial intelligence (AI) and machine learning to control these functions. The International Committee of the Red Cross (ICRC) believes that human control is a necessary limit on AWS. 

Since 2014, an intergovernmental discussion has focused on challenges posed by AWS. The ICRC has been actively involved, in line with our international mandate to assist and protect victims of armed conflict. We have contributed analysis of the legal, ethical, technical and humanitarian implications, including recommendations supporting our call for states to establish limits on autonomy in weapon systems and ensure human control over the use of force.

Technological advances can affect human interaction with weapon systems and decisions to use force. Even if humans remain generally involved in the use of weapons, their decisions on the use of force can become more and more removed – in time and space – from the point at which that force is experienced. This challenges humans’ capacity to predict and understand the implications of their decisions, which in turn raises concerns about legal compliance, ethical acceptability and operational effectiveness. The main question is: what type and degree of human control is required in practice?

Why is human control necessary?

Human control over weapons and the use of force is required legally, ethically and operationally. 

The use of any weapon in armed conflict, including AWS, must comply with international humanitarian law (IHL, also known as the law of armed conflict). This includes core legal principles such as: the obligation to distinguish between civilians and combatants (and civilian objects and military objectives); the need to take precautions in attack; the prohibition of attacks causing excessive civilian death, injury or damage (disproportionate attacks); and the prohibition of attacks or weapons that are indiscriminate. Such rules require qualitative, context-specific judgements based on the interpretation of a particular situation (the circumstances of a particular attack) rather than numbers or technical indicators. These are value judgements which must be made by humans.

From an ethical perspective, there is the question of distance: the use of AWS widens the temporal, geographic and cognitive gap between human decisions to use force and their consequences. There is also the argument that human agency must be retained in life-and-death decisions, both to uphold moral responsibility and to recognise the human dignity of those affected. AWS risk eroding this active human engagement in decisions to use force.

Human control is not only required by law and ethics; it is also essential for effective military operations, which demand a level of safety and reliability. Humans make sense of situations in ways that differ fundamentally from algorithmic calculation, and these abilities allow them to understand previously unfamiliar contexts, changing tactics, and overarching goals and strategies. Only humans can make qualitative value judgements and think reflectively about the consequences of their actions and how they might be perceived or anticipated by the enemy.

Implications of AI and machine learning for human control

The ICRC has urged caution in the use of AI and machine learning in armed conflict, particularly in AWS. AI, and especially machine learning, carries risks of unpredictability, unreliability (a safety concern), lack of transparency (an explainability concern) and bias.

To comply with IHL, soldiers must be able to limit the effects of their weapons. This is only possible if they can reasonably predict how their weapon will function in any given situation. All AWS raise some concerns about unpredictability because outputs will vary depending on the attack environment. AWS that incorporate machine learning, in particular, have been called ‘unpredictable by design’ because they essentially write their own rules and code, and can even continue to ‘learn’ and change their functioning over time.

Machine-learning systems build their own rules based on the data to which they are exposed – whether in training or through trial-and-error interaction with their environment. As such, their functioning is far less predictable than that of pre-programmed systems, and is highly dependent on the quantity and quality of available data. Machine-learning systems are also especially vulnerable to ‘adversarial conditions’, such as environmental modifications designed to fool the system, or adversarial images and conditions produced by another machine-learning system (for example, a generative adversarial network, or GAN). To further complicate matters, it is currently often impossible for the user to understand how and why the system reaches its output from a given input. In other words, the output is not explainable – this is the ‘black box’ problem.
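To make the contrast with pre-programmed systems concrete, the short Python sketch below trains a toy classifier on synthetic data. All numbers, labels and parameters are invented purely for illustration and have nothing to do with any real weapon or targeting system. It shows two of the points above: the decision rule the system ends up with is produced by the training procedure rather than written by a programmer, and a small, deliberately chosen change to an input can flip the output – the basic intuition behind adversarial examples.

```python
# Illustrative sketch only: a toy classifier whose decision rule is learned
# from synthetic data rather than hand-written, and whose output can flip
# under a small, deliberately chosen perturbation of the input.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: two overlapping clusters labelled 0 and 1.
X = np.vstack([rng.normal(-1.0, 1.0, size=(200, 2)),
               rng.normal(+1.0, 1.0, size=(200, 2))])
y = np.array([0] * 200 + [1] * 200)

# Train a logistic-regression classifier with plain gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probabilities
    w -= 0.1 * (X.T @ (p - y) / len(y))      # gradient step on the log-loss
    b -= 0.1 * float(np.mean(p - y))

def predict(x):
    return int((x @ w + b) > 0)

# An input near the learned decision boundary...
x = np.array([0.05, 0.0])
# ...and a small nudge in the direction that most changes the output.
x_perturbed = x - 0.2 * np.sign(w)

print("learned rule (weights):", w, "bias:", b)  # the rule came from data, not a programmer
print("prediction for x:          ", predict(x))
print("prediction for perturbed x:", predict(x_perturbed))
```

The learned weights, and therefore the system’s behaviour, follow entirely from the data it was trained on; change the data and the rule changes with it.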

There is also a growing body of evidence of bias in decision algorithms in the civilian domain. Such algorithms can disadvantage people – for example by postcode, ethnicity or gender – affecting decisions on insurance risk, loans, job interviews, bail, custodial sentencing, airport security and predictive policing. Imagine trying to manage such bias in algorithms used in armed conflicts involving unfamiliar cultures. Modern warfare rarely involves high concentrations of military personnel making contact on a battlefield; instead, it is characterised by unmarked combatants moving among and hiding within the civilian population. AWS – or decision-support algorithms – are seen by some as a useful aid in selecting such individuals or classes of people as targets, but inherent bias is just one problematic aspect of this proposal.

More broadly, the use of machine learning for computer vision applications highlights the fundamental semantic gap between humans and machines: machine calculations and human judgements are not equivalent. An algorithm trained on images of certain objects may be able to identify and classify those objects in a new image. But the algorithm does not understand meaning, so it can make mistakes that a human never would, such as classifying an object as something completely different or mistakenly including an unrelated object.
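As a purely illustrative sketch of this semantic gap, the toy classifier below (class names and data are invented, and the method is deliberately simplistic) assigns every input to whichever of its known classes is numerically closest. It has no concept of meaning, so an input that resembles nothing it was trained on still receives a confident-looking label rather than an ‘I don’t know’.

```python
# Illustrative sketch only: a toy nearest-centroid 'image' classifier that
# maps *every* input to one of its known classes by numerical similarity.
# Class names and data are invented purely for illustration.
import numpy as np

rng = np.random.default_rng(1)

# Pretend 8x8 'images' for two known classes, as flattened pixel vectors.
train = {
    "tractor": rng.normal(0.2, 0.05, size=(50, 64)),
    "truck":   rng.normal(0.8, 0.05, size=(50, 64)),
}
centroids = {label: imgs.mean(axis=0) for label, imgs in train.items()}

def classify(image):
    # Pick the class whose centroid is numerically closest - nothing more.
    distances = {label: np.linalg.norm(image - c) for label, c in centroids.items()}
    return min(distances, key=distances.get)

# An input that is neither a tractor nor a truck (random noise).
unrelated = rng.uniform(0.0, 1.0, size=64)
print(classify(unrelated))  # still returns 'tractor' or 'truck', never 'unknown'
```

The calculation is purely numerical: nothing in it corresponds to the human judgement of what the object actually is.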

States and their militaries should approach the use of AI and machine learning, whether in weapons or decision-support, with extreme caution.

Operationalising human control 

To operationalise human control, multiple safeguards should be built into the command and control system. These constraints stem from legal obligations and ethical considerations in the conduct of hostilities, but they also need to be factored in during the R&D, acquisition, deployment and training phases. The requirement for context-specific judgements means that the main focus must be user control at the point of use, including during a specific attack; human input during earlier phases of the decision-making process cannot substitute for this control in use. To facilitate such control, autonomous systems may need to be designed and used at human speed, rather than accelerating decisions to a machine speed beyond human intervention. And to exercise control, the user must also have a sufficient understanding of both the AWS itself and its use environment – considerations which militaries should account for in training.

What types of controls on AWS are needed?

A June 2020 joint report published by the ICRC and the Stockholm International Peace Research Institute (SIPRI) outlined three types of controls on AWS, a combination of which is needed to ensure human control over the use of force:

  1. Controls on the weapon system’s parameters of use, including measures that: restrict the type of target and the task; place temporal and spatial limits on its operation; constrain the effects of the AWS; and allow for deactivation and fail-safe mechanisms.
  2. Controls on the environment, namely measures that control or structure the environment in which the AWS is used – for example, only using AWS in environments where civilians and civilian objects are not present, or excluding their presence for the duration of the operation.
  3. Controls through human–machine interaction, such as measures that allow the user to supervise the AWS and to intervene in its operation where necessary, and those that ensure predictable and transparent functioning.

Concluding thoughts

Human control and judgement over weapons and the use of force in armed conflict is legally, ethically and operationally required. This calls for limits on AWS, specifically safeguards or controls on weapon parameters, the environment of use, and human–machine interaction. States and militaries must also consider the additional concerns and challenges where AI and machine learning are used in AWS, particularly unpredictability, unreliability, lack of transparency and bias. Ultimately, users of AWS must have a sufficient understanding of both the AWS and its interaction environment, and sufficient control to make the context-specific judgements required to satisfy IHL rules and ethical considerations.

About the Author: Emily Defina is a Legal Adviser with the International Committee of the Red Cross Regional Delegation for the Pacific, based in the Canberra Mission.

Cover image by xresch from Pixabay