Every conflict in human history has been accompanied by a wave of non-combatant suffering, and the proliferation of militarised autonomous systems in this century is likely to sustain or even increase it. Yet those same systems may also provide new opportunities for humanitarian aid in austere and conflict-affected environments.
Humanitarian aid groups would be well advised to invest in their own autonomous platforms. This is an achievable way to dramatically enhance their capabilities and meet the threat that militarised autonomy will pose to civilian populations in the coming century.
Killer Robots?
The media are awash with speculative horror stories about autonomous weapons dramatically increasing the risk to civilians – or even deliberately hunting them. Autonomy and its impact present challenges for civilians in war which deserve consideration.
First, if armies are predominantly made up of autonomous weapons in the future, civilians will have far fewer opportunities to ask for help. A refugee can approach a human unit and – depending on a host of contextual factors – potentially find shelter. An AI-controlled gun on tracks may not recognise such a request for assistance. As the technology develops it is plausible that humans may take on a supporting role, with autonomous weapons displacing human presence at the frontlines. If so, this will significantly influence the survival strategies of civilians in armed conflict.
Second, as outlined in my previous article, the proliferation of autonomous weapons is likely to further erode the already dubious protection of ‘safe areas’. In an environment of well-hidden machines manoeuvring against each other, civilians (especially internally displaced persons) are unlikely to find lasting safety. This is also likely to complicate relief efforts.
Third, it is plausible that civilians would be deliberately targeted by autonomous weapons. Some states may program their platforms to kill anyone identified as a member of a certain group or present within a given area. Unlike with human soldiers, there may be fewer opportunities to evade capture or bargain for safe passage.
Lastly, autonomous weapons can process information and engage targets at high speed. There is likely to be pressure to reduce the time taken to verify targets – especially if the enemy abdicates that responsibility. Unshackling AI to conduct fire missions with minimal human oversight is likely to increase operational efficiency at the cost of greater collateral damage, especially as the heuristics guiding AI decision-making are likely to be opaque to humans.
This outcome is not guaranteed, though. It may prove that AI can assess risk to civilians more swiftly and accurately than humans can. But that hypothesis remains untested, and the implications if it is wrong are worrying.
A New Humanitarianism
The above section demonstrates that autonomous weapons pose new challenges to civilian wellbeing in conflict, stacked atop the considerable problems already faced by vulnerable populations. However, humanitarian actors can also harness autonomous systems to their own ends – increasing their monitoring capacity, improving their ability to deliver vital goods, and reducing the risk faced by their personnel.
The defence industry is currently the leading provider of these capabilities, and more broadly, military autonomous systems are likely to be dual-use. However, civilian organisations should not rely on innovation from militaries – they should procure technology for themselves, to ensure it is tailored to their needs and does not create potential dependencies.
First, AI analysis of imagery gathered from drone or satellite constellations may provide accurate data on the movement of people and their needs. Image intelligence is a long-standing capability of governments. Now it is increasingly available to the public, with AI image analysis reducing its personnel and time costs. For example, the company Orbital Insight partnered with commercial satellite imagery company Planet Labs to track the growth of Xinjiang re-education camps.
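To make the underlying technique concrete, the sketch below shows a deliberately simple change-detection pass over two satellite images of the same area, flagging the share of pixels that shifted markedly between acquisitions. It is a minimal illustration in Python, not any provider's actual pipeline; the arrays, threshold, and function names are all assumptions.

```python
# Minimal change-detection sketch: flag pixels that shifted markedly between
# two satellite passes. Assumes both images are co-registered, greyscale, and
# radiometrically comparable on a 0-1 scale; all names and the threshold are
# illustrative, not a real provider's pipeline.
import numpy as np

def changed_fraction(before: np.ndarray, after: np.ndarray,
                     threshold: float = 0.25) -> float:
    """Return the share of pixels whose intensity changed past the threshold."""
    changed = np.abs(after - before) > threshold
    return float(changed.mean())

# Toy scene: a 100x100 area where a 20x20 block brightens between passes,
# standing in for new construction such as camp expansion.
rng = np.random.default_rng(0)
before = rng.random((100, 100)) * 0.2
after = before.copy()
after[40:60, 40:60] += 0.6  # simulated new structures
print(f"{changed_fraction(before, after):.1%} of the scene changed")  # ~4.0%
```

Real systems replace the raw intensity threshold with trained object detectors, but the principle – comparing passes over time and surfacing what changed – is the same.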
As commercial drones and satellites become more common, and the AI capability matures, access to this type of data may increase. One could imagine by 2050 a humanitarian organisation able to access live information on the number and movements of refugees in an area, how much food is being trucked into local markets, the levels of local water sources – along with dozens of other helpful indicators.
Second, UAV delivery could bring vital supplies to at-risk areas. Zipline is a drone delivery company which has revolutionised the movement of medical supplies in Rwanda, cutting the transport time for transfusion blood from hours to minutes. Other companies are following suit, and the first steps are being taken to apply this in the humanitarian space. These drones could exploit their small size and high manoeuvrability to transport limited quantities of vital supplies into high-intensity warzones.
As the technology matures, they may also be able to move larger volumes of aid – heavy-lift quadcopters or airships could deliver aid directly where it is needed without requiring a conventional airport.
Last, self-driving trucks in a self-organising swarm could be used for major aid shipments. This would present two key advantages.
First, as the trucks are autonomous, the threat to aid workers would be reduced. That is likely to promote greater risk tolerance, and hence improve the chances of assistance reaching vulnerable areas.
Second, rather than waiting for centrally planned convoys which may not meet shifting local requirements, aid workers on the ground could request the supplies they need. The swarm could then automatically task the nearest vehicle to fulfil the order, storing the data for future reference. This would improve the flexibility of aid logistics, to the benefit of all stakeholders.
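A minimal sketch of that tasking logic follows, under assumed data structures – the Truck class, the dispatch function, and the example coordinates are all illustrative, not an existing system. Each incoming request is matched to the nearest available vehicle by great-circle distance, and the decision is logged for future reference.

```python
# Nearest-vehicle tasking sketch for an autonomous aid convoy. All names and
# data structures here are hypothetical illustrations of the concept.
import math
from dataclasses import dataclass

@dataclass
class Truck:
    truck_id: str
    lat: float
    lon: float
    available: bool = True

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two points."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def dispatch(trucks, lat, lon, log):
    """Task the nearest available truck to a request and record the decision."""
    free = [t for t in trucks if t.available]
    if not free:
        return None
    nearest = min(free, key=lambda t: haversine_km(t.lat, t.lon, lat, lon))
    nearest.available = False
    log.append((nearest.truck_id, lat, lon))  # stored for future reference
    return nearest

# Toy fleet of two trucks, with a request arriving from a clinic to the north.
fleet = [Truck("A1", -1.68, 29.22), Truck("A2", -1.50, 29.30)]
history = []
chosen = dispatch(fleet, -1.45, 29.28, history)
print(chosen.truck_id if chosen else "no truck available")  # expect A2
```

A production system would add load, fuel, and route-safety constraints, but even this simple greedy rule captures the core advantage: requests are matched to vehicles locally and immediately, without waiting on a central planner.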
Conclusion
Militaries are already investing in autonomous weapons. There is no plausible world in which they will stop doing so. Humanitarian organisations and advocacy groups should rise to meet the challenge by ramping up investment in and innovation towards the use of autonomous systems for humanitarian purposes. Failure to do so will only harm their mission and the civilians they serve.
About the Author: Matthew Ader is an undergraduate student at King’s College London in the Department of War Studies. He is an editor at Wavell Room, and tweets from @AderMatthew.