Imagining the Future – Global Security in the Age of Autonomous Machines

FOREWORD

Clare O’Neill 2017

Visualising the future operating environment can be a difficult and daunting activity when there are myriad possibilities and problems at play. It can also be difficult to openly debate your ideas knowing the future may well prove you wrong. Yet robustly debating the possible character of war and warfare is exactly what we should do to drive our concepts, doctrine and training. The following essay by Alistair Dickie demonstrates that projecting your ideas about the future is a worthy undertaking. Part 1 provides the context for Part 2, an unedited paper (written in 2006) that imagined the future we are now living in.


PART 1 – THE PRESENT – PRELUDE – IMAGINING THE FUTURE

Alistair Dickie 2017

Half a generation ago, prior to iPhones and social media, while I was a student at Staff College, I had a few ideas. The thinking grew from a master's thesis I had written a few years earlier on designing algorithms for swarming robots, intertwined with my newly discovered interest in global security theories. At the time my ideas were viewed as a bit ‘out there’. Now, almost eleven years on, I feel the paper is perhaps even more relevant (even if some of the language is a little embarrassing).

Some important points to consider after you have read Part 2:

  • The world continues to march towards the age of autonomy. More recently, other authors have described similar ideas as the Fourth Industrial Revolution. It is now clear to almost everyone who thinks about the future and understands technology. Kurzweil’s most recent prediction is that by 2029 your interaction with a computer (or phone) will be like your interaction with a human. See this post and watch the video interview from SXSW to understand what he is saying.
  • We are about halfway from when I wrote the paper to 2029, but because of exponential technological growth we could see roughly 30 times as much technological improvement over the next 12 years as we have had over the last 11. Technological growth is not linear (a toy calculation after this list illustrates the arithmetic).
  • I feel the impact of our certainly uncertain technological future on global security is just as consequential as I described. Some of the idealist stuff is happening, such as calls for robot treaties, as is some of the realist stuff.
  • The 2007 DARPA Urban Grand Challenge was successful. Given today’s reality, it seems silly to think that it might not have been.
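
To make the arithmetic behind the 'roughly 30 times' point concrete, here is a toy calculation. It is not from the original paper: the two-and-a-half-year doubling period is an assumption chosen purely for illustration, and the resulting ratio is very sensitive to that choice.

```python
# Toy illustration of compounding technological improvement. The doubling
# period below is an assumption for illustration only; the printed ratio
# changes substantially if a different period is assumed.

DOUBLING_PERIOD_YEARS = 2.5  # assumed, not a figure from the paper


def capability(years_from_2006: float) -> float:
    """Relative capability, with the 2006 baseline normalised to 1.0."""
    return 2.0 ** (years_from_2006 / DOUBLING_PERIOD_YEARS)


gain_2006_to_2017 = capability(11) - capability(0)   # improvement over the last 11 years
gain_2017_to_2029 = capability(23) - capability(11)  # improvement over the next 12 years

print(f"2006-2017 improvement: {gain_2006_to_2017:.0f}x the 2006 baseline")
print(f"2017-2029 improvement: {gain_2017_to_2029:.0f}x the 2006 baseline")
print(f"Ratio: about {gain_2017_to_2029 / gain_2006_to_2017:.0f} times as much improvement")
```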

I urge all military professionals to think about the future, write it down, and debate what it means.


PART 2 – THE PAST – ESSAY – GLOBAL SECURITY IN THE AGE OF AUTONOMOUS MACHINES

Alistair Dickie 2006

Introduction

In 2004 the US Defense Advanced Research Projects Agency (DARPA) held the first Grand Challenge for autonomous ground vehicles: a 142-mile desert course that vehicles were to traverse autonomously with no human interaction. None of the 15 vehicles made it past 7.4 miles. The event was criticised in the science media as a debacle and some analysts believed that successful autonomous vehicle navigation was years away.[1] Undeterred, DARPA held a second Grand Challenge in 2005. Five of the 23 teams completed the 130-mile desert course, the fastest in just under seven hours. One year of improvement in technology and engineering made all the difference. These vehicles represent the leading edge in robot environmental processing and decision making. This example is just one of the many projects scientists and engineers are working on to produce robots that can make intelligent decisions. As these projects come to fruition the world will enter the age of autonomous machines. The impact on humanity is likely to be far greater than industrialisation or the information explosion. What will this mean for the security of our world?

The aim of this paper is to introduce the reader to the impact of the age of autonomous machines on global security. It focuses on the likely change to military capability that robots will deliver, and then considers how this change in capability will affect global security through prominent international relations paradigms. To begin, the age of autonomous machines is defined by considering the nature of a technological age, the kind of machines and the degree of autonomy required.

The age of autonomous machines

During the late 18th and early 19th centuries the industrial revolution took hold in Europe. New technology, such as the steam engine, enabled industry to develop and global production soared. Later, in the late 19th and early 20th centuries, a second industrial revolution occurred with the advent of the internal combustion engine and electric power. Towards the end of the 20th century the information age emerged, describing the global growth of mechanisms to store, transport and develop information. The industrial and information ages were enabled by the creation and use of revolutionary technologies. However, they are most properly marked by changes in the expectations of individuals; revelations that a revolution had occurred. The beginning of the age of autonomous machines will similarly be marked by changes in expectation. Individuals, nations, and eventually the world, will, at some stage, expect that robots can be given tasks to perform, and expect that they will be effectively completed.

It is likely that robots in the age of autonomous machines will have the physical capability of all mechanical apparatus that exist today. Improvements in mechanical engineering will extend this capability. Further, it is likely that in time machines will be able to mimic the physical capabilities of any biological animal. The age of autonomy described in this paper does not, however, require that machines be physically capable of every task. Rather, the threshold is met when machines are capable of many useful tasks. What is required is to give the machines that currently exist, and those yet to be developed, a degree of autonomy such that they can perform useful tasks autonomously.

Levels of autonomy for robots may be described by three domains of complexity: the environment in which they operate, the complexity of the mission, and the level of human-machine interface required.[2] At the lower end of each of these spectrums are the robots of today. Today’s robotic machines are usually confined to a single environment, are designed for one mission, and their human-machine interface involves constant operator supervision. Some of today’s machines are more capable in one of these domains. For example, modern Explosive Ordnance Disposal robots are designed for complex environments, but they have a limited mission profile and require a human operator. Cruise missiles, on the other hand, have little human-machine interaction past the orders stage, but they have a limited mission (destroy something) and operate in only one simple environment. The ‘strong autonomy’ required for the age of autonomous machines described in this paper requires complexity in each domain. Robots must be able to operate in many complex environments, undertake a variety of changing missions, and do so with very limited human-machine interaction. This level of autonomy will be a product of the confluence of technological advancement in many disciplines, a revolutionary technological change.
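
The framework cited above is conceptual rather than computational, but its core claim, that ‘strong autonomy’ requires capability on every axis at once rather than on any single one, can be sketched in a few lines. The scales, threshold and class below are hypothetical illustrations, not the actual metrics of the ALFUS framework referenced in the text.

```python
from dataclasses import dataclass


@dataclass
class AutonomyProfile:
    """Illustrative 0-10 scores on the three autonomy axes described above (hypothetical scales)."""
    environment_complexity: int  # 0 = one simple environment, 10 = many complex environments
    mission_complexity: int      # 0 = one fixed mission, 10 = a variety of changing missions
    human_independence: int      # 0 = constant operator supervision, 10 = very limited interaction

    def is_strongly_autonomous(self, threshold: int = 7) -> bool:
        # 'Strong autonomy' requires high scores on every axis, not just one.
        return min(self.environment_complexity,
                   self.mission_complexity,
                   self.human_independence) >= threshold


# The essay's two examples: each is capable on one axis but limited on the others.
eod_robot = AutonomyProfile(environment_complexity=8, mission_complexity=2, human_independence=1)
cruise_missile = AutonomyProfile(environment_complexity=2, mission_complexity=1, human_independence=8)

print(eod_robot.is_strongly_autonomous())       # False
print(cruise_missile.is_strongly_autonomous())  # False
```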

Future robotic technology

The history of predicting the future is littered with inaccuracy and clouded judgment.[3] Despite this history, the temptation to predict the advance of technology is motivated by a desire to understand, and prepare for, impending times. Throughout history scientific revolutions have created paradigm shifts in the nature of politically driven violence.[4] Similarly, the age of autonomous machines will be enabled by a scientific revolution. The revolution will occur not immediately when autonomous machines become possible, but when a revelation occurs that changes the expectations of the world. What technologies need to develop, and when will this happen?

The most significant of all technological changes required for autonomy is an increase in computational power and the development of appropriate algorithms. This will enable machines to analyse and make decisions. Technological futurists describe the point in time at which the capability of non-human decision making and analysis exceeds that of man as the technological singularity.[5] Most technological futurists predict the emergence of affordable machine intelligence in the 25 to 75 year timeframe.[6] Kurzweil predicts 2040 for the singularity, and makes a strong argument for the inevitability of artificial intelligence based on the law of accelerating returns.[7] Kurzweil’s predictions consider Moore’s Law, the assertion that the number of transistors on computer chips doubles every 24 months[8], as the fifth in a series of computing technologies that describe an exponential growth in computing power. If the pace of increase in the computational power of machines continues, and there is no indication of any significant slowdown, then Kurzweil suggests that by about 2020 a single standard desktop computer will have the computational speed of a human brain. By about 2030 a machine of the same cost will have the computational speed of all of the brains of all of the humans on earth. Whatever the eventual date, many scientists agree that the hitherto elusive delivery of autonomous analysis and decision making by machines will occur in the first half of the 21st century.
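
For reference, here is a minimal sketch of the doubling arithmetic such projections rest on, using the fixed 24-month period quoted above for Moore’s Law. Kurzweil’s law of accelerating returns assumes the doubling period itself shrinks over time, so a fixed period like this understates his curve; the figures are illustrative only.

```python
def growth_factor(years: float, doubling_period_years: float = 2.0) -> float:
    """Computing-power growth factor under a fixed doubling period."""
    return 2.0 ** (years / doubling_period_years)


# From the paper's vantage point of 2006:
print(growth_factor(14))  # to 2020: about 128 times 2006 computing power per dollar
print(growth_factor(24))  # to 2030: about 4,096 times
```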

Many other technologies will contribute to the age of autonomy. High bandwidth communications will allow machines to communicate with each other and their human masters, although machines with greater decision-making capability will have a lesser requirement to communicate, as there is a wider spectrum of actions they can perform without direction. Advanced materials such as Electro Active Polymers will deliver actuation devices that enable a new range of bio-mimetic and power efficient robots.[9] Power supplies based on advanced battery technology, fuel cells and micro turbines, delivering energy and power densities at multiples of those available today, will be available in the next few decades. Complex sensors based on increases in electro-optical resolution, micro-electromechanical machines, and novel materials will allow robots to sense and process their environment with increased fidelity. All of these technologies are being integrated today to produce the autonomous machines of the future.

Changes to military capability

The range of military platforms likely to emerge as the age of autonomous machines takes hold is difficult to comprehend. Every military platform of today will be able to be automated to an extent. Indeed, many projects are underway to automate machines currently operated by soldiers, including machines to physically replicate the soldier himself. Automation of today’s machines will just be the beginning. The next step will be to design machines without human operator constraints. Rather than conduct a detailed analysis of all potential autonomous platforms, a summary of the military effect that autonomous machines will be able to generate is presented.

Military operations may be considered to occur through three processes: sensing enemy targets, deciding how to affect them, and acting to do so. In the age of autonomous machines, swarms of machines are likely to be able to sense anything that is considered a target today. With manned sensors, each additional soldier in the area of operations requires more soldiers in support roles. In contrast, each additional autonomous sensor delivered to a target area results in an overall reduction in human cost per sensor. A sensor-saturated battlespace will be normal. In a similar vein, any target effect that is generated today, and a range of new target effects that will be developed, will be able to be delivered to targets with ease. Once a target is identified, keeping track of that target and acting to deliver an effect will become more or less trivial. The task remaining for military practitioners will therefore be deciding which targets to effect. While the decision making process will be augmented by computer analysis it will remain, at least at the start of the age of autonomous machines, a fundamentally human domain.

To summarise what has been presented so far: the age of autonomous machines will begin with the revelation that there has been a technological revolution in the ability of machines to operate in complex environments, perform complex missions, and do so with reduced human interaction. Advances in a range of technologies suggest that this revolution will occur in the first half of the 21st century. Military capabilities to sense and effect targets will become all pervasive, the major task being to decide which targets to effect. With this summary it is time to turn to the second part of this paper. What does the age of autonomous machines mean for global security?

Global security in transition

Transition from the information age to the age of autonomous machines represents the most interesting period to analyse in the global security context. Not only is this period imminent, but it is also the period most likely to cause disharmony in international affairs. The world will have become comfortable with the impact and opportunities that the information age presents, and then a technological revolution in robotic technology will once again transform the global way of life.

Academic consideration of the maintenance of global security and international relations is often characterised by paradigms such as realism, liberalism, critical theory and constructivism. Global security paradigms differ most markedly in their assumptions, their objects of analysis, and the intent for which analysis is conducted.[10] All paradigms have merit in describing historic international actions, and in informing the decision making of today’s international actors. To consider the likely impact of the age of autonomous machines on global security, it is possible to examine its effect on the assumptions behind a number of global security paradigms. First, the effect of autonomous power on realist assumptions will be considered. This is followed by consideration of the effect of autonomous capability on other factors that, according to idealist thought, drive international relations.

The quest for power

Two basic assumptions about nation-states are fundamental to realism: that the quest for power is the main driving force behind decision making, and that nation-states exist together in a state of global anarchy. Some realists argue that the inherently competitive aspect of the human psyche contributes to the ongoing phenomenon of international conflict. Realism incorporates other ideas such as limited cooperation and polarity between nations, and, in less classical approaches, the inclusion of wider motives beyond the search for survival and power on the international stage. In all of its varied forms, however, realist views incorporate power as a central theme.[11][12]

The age of autonomous machines will bring a new kind of power to nation-states. It is new not only because the mechanism through which power may be delivered will be revolutionary, but because it can be applied differently. ‘Autonomous power’ will be quite different to other forms of power available today. Like many other aspects of military capability, autonomous power will allow international actors to deliver energy. Today’s conventional militaries, and those equipped with nuclear weapons, can deliver energy relatively effectively. However, autonomous power will give those that possess it the ability to deliver energy at much less human cost and with much more precision than either conventional or nuclear power. The mechanism through which autonomous power emerges will alter the power relationship between nations, and, therefore, according to the realist viewpoint, significantly alter the global security balance.

Of most importance will be the degree of polarity between nations as autonomous power emerges. Autonomous machine capability may upset the international balance of power. If one nation, independently of others, develops a strong autonomous capability, a number of scenarios are possible. Other nations, realising that their influence and security will be challenged, may strike preemptively to negate autonomous capability with conventional capability. Perhaps more likely is the development of an autonomous power arms race, where many nations compete for a technological edge in the autonomous realm. The emergence of previous revolutionary military technologies, such as the tank, aircraft carrier, submarine, and nuclear weapon, has in most cases spurred an arms race between major powers. The development of autonomous power will have some significant differences.

The main difference between autonomous power and today’s power will be its availability. Autonomous machine technology has many civil applications. Indeed, it is possible that civil applications of autonomous machines will mean that military autonomous power develops as an adjunct capability. In this scenario it is likely that even the poorest of nations will have access to robust autonomous power. Furthermore, other international actors will have access. Non-government organisations, multinational corporations and terrorists are all likely to be able to apply civil robotic technologies to military roles. Multinational organisations will be important in developing autonomous power, enhancing their access. The likely wide availability of humanoid robots with civil applications could be attractive to terrorist organisations for suicide-bomber-like missions.

Another difference is that autonomous power may be applied with much more discrimination and precision than previous power. This is particularly the case when autonomous capability is compared to nuclear capability. Those that have autonomous power will be able to deliver just the quantity of energy required to achieve an effect. Rather than destroy a bridge by dropping a bomb on it, for example, a micro robot force could be ordered to dismantle it piece by piece.

The special case of maintaining the balance of power between a nuclear nation and one that has a significant autonomous capability should be considered. A non-nuclear nation developing a strong autonomous power will upset this balance. A nuclear nation has a very powerful capability that cannot be applied with precision and is therefore unusable in many cases. Nations with a strong autonomous capability can apply precision. Initially, nuclear tactics will dominate such a balance of power as centralised autonomous machine facilities can be targeted and destroyed. However, if allowed to develop to maturity, autonomous power will dominate as the production of autonomous machines can be dispersed, and their discriminatory application means that they are more usable. Autonomous machines would generally be able to operate effectively in an environment contaminated by nuclear radiation.

In brief summary, the age of autonomous machines will beget autonomous power. This new form of power is different as it will be able to be applied with discrimination, precision, and at low human cost. An autonomous machine arms race to balance power will be able to be joined by many nations. If we follow the realist assumption concerning the centrality of power in international relations then, for many, the world has a bleak outlook. Contemplation of the assumptions behind paradigms associated with the search for peace may provide more insight.

The search for peace

Counter arguments to the international power competition ideas of realism take many forms, such as liberalism and critical theory. Idealism is the basis behind many such paradigms and is often used to describe the counterpoint to realism. It asserts that human nature is inherently good and that power can be used for positive outcomes. The driving force behind international relations is, or should be according to idealists, shared interests and ideals, one of which is the shared global security interest.[13] Liberalism, for example, contends that shared preferences, rather than shared capabilities, are the key determinants of state behaviour and that conflict is an aberration. Two idealist assumptions will be considered in more detail: that international actors will work together to control power for their collective good, and that globalisation of commerce enhances global security.

The product of idealist thought has, in the past, controlled the spread and use of power. Idealists contend that power, in and of itself, is not inherently bad, and that most international actors will agree to controlling power in the interest of the greater collective good. In doing so the relative power of international actors is ignored; the collective security of all states is considered more important. Such a position leads to two ideas that will be examined in the context of the age of autonomous machines: the formulation of an international agreement, or treaty, to control autonomous power, and the employment of international coalitions to keep the peace.

International revelation that one or more states are developing strong autonomous power may cause the global community to demand that control measures be developed. A reasonable expectation is that many states would call for international agreements or treaties, such as today’s treaties that control weapons of mass destruction and land mines. The potential success of any agreement is dubious. The nuclear non-proliferation treaty was opened for signature in 1968, over twenty years after nuclear weapons began to proliferate. Developing an international agreement is complex and takes many years. The likely rapid onset of the age of autonomy suggests that autonomous capability will proliferate before an international agreement is developed and ratified. A further reason for the likely failure of attempts to control military autonomous machine development is the near impossible task of monitoring. Robots for civilian purposes will be commonplace. Conversion for military purposes will be quick, cheap and easy. Given, then, that international control of the proliferation of autonomous machines is not likely to be wholly successful, the international community will resort to a more reactive approach to negating autonomous power where appropriate.

Today international military forces are formed through the UN and through coalitions of the willing. Autonomous power affects the formation of international forces both negatively and positively. Initially, autonomous forces from different international actors are unlikely to be able to work together effectively. Different communication protocols and machines with different capabilities and configurations will mean that nations combining autonomous power to meet a common enemy will face difficulties, similar to those faced today in the sharing of information between coalition partners.[14] As the age of autonomy matures this negative aspect of coalition building is likely to improve. A more positive aspect of the age of autonomy has its foundation in the purest idealist thought. The formation of a standing UN peacekeeping force becomes more possible with the age of autonomy. The international community will have the capability to generate autonomous power for the same reason as small nation-states: a fully autonomous force will be cheap to produce, and will require few citizens of any particular state.

The age of autonomy will enhance the current trend towards globalised international commerce. Development and production of autonomous machines, despite the ease with which this paper speaks of them, will require technological development that crosses international boundaries. Multinational corporations and research groups already feature heavily in the development of various technologies. This international effort is likely to strengthen the relationships between nations, deepening understanding and interdependence. Globalisation of the commercial backbone to autonomous machine development may therefore enhance global security.

Future Work

This essay has barely scratched the surface of contemporary consideration of international relations theories and how they relate to the age of autonomous machines. Further work could treat each major field of thought in much more detail. Doing so would give a more complete view of what the age of autonomy implies for global security. The next logical step would then be to analyse what the implications for global security mean for Australia’s strategic posture.

Conclusion

Revolutionary technological change will produce the military capability to deliver power autonomously, fundamentally changing the way international actors prosecute violence upon each other. Global security is threatened by the development of autonomous power. The classical realist view suggests that an international arms race to balance this new and different capability will occur. Idealistic calls to control this power through treaties are likely to fail because of the manner in which the age of autonomy will develop, and because of practical difficulties in monitoring. International coalitions to ensure peace will also be troublesome. However, a standing autonomous UN peacekeeping force, and the globalisation of nations required to create autonomy, may enhance the security of our world.

Late in 2007 DARPA is planning to hold another Grand Challenge, this time over a 60-mile course simulating supply missions in urban operations. To succeed, the robotic vehicle must ‘autonomously obey traffic laws while merging into moving traffic, navigating traffic circles, negotiating busy intersections and avoiding obstacles.’[15] Two years on from the previous challenge, some observers believe that this far more complex task cannot be done. If not this year, then at some stage sooner than many expect, this complex task, and many others like it, will be able to be undertaken by autonomous machines. Ultimately, it is impossible to predict the future with any accuracy. It is, however, possible to make reasonable assumptions and to base strategy upon them.


About the author

Colonel Alistair Dickie is the Director Business Transformation in Army Headquarters. At the time of writing Global Security in the Age of Autonomous Machines Major Alistair Dickie was a student at the Australian Command and Staff College.


Endnotes
[1] Joseph Hooper, ‘DARPA’s debacle in the desert’, Popular Science, June 2004, sourced from http://www.popsci.com/popsci/darpachallenge/b05a1196aeb84010vgnvcm1000004eecbccdrcrd.html 30 Jun 06.
[2] Hui-Min Huang, Kerry Pavek, James Albus, and Elena Messina, “Autonomy Levels for Unmanned Systems (ALFUS) Framework: An Update,” Proceedings of the 2005 SPIE Defense and Security Symposium, March 2005, Orlando, Florida.
[3] Rear Admiral Bill Rowley, ‘The Future Is Not What It Used To Be’, Air War College, April 1995, http://www.au.af.mil/au/awc/awcgate/awc-ofut.htm.
[4] Robert F. Baumann, ‘Historical Perspectives on Future War’, Military Review, vol. 7, no. 2, March – April 1997, pp 40 – 48.
[5] Vernor Vinge, ‘The Coming Technological Singularity – How to survive the post human era’, Vision 21 Symposium, March 30 – 31 1993, sourced from http://www-rohan.sdsu.edu/faculty/vinge/misc/singularity.html 30 Jun 2006.
[6] John Smart, ‘Singularity Timing Predictions’, http://www.accelerationwatch.com/singtimingpredictions.html
[7] Raymond Kurzweil, ‘The Age of Spiritual Machines’, Viking Adult, 1999.
[8] Gordon E. Moore, ‘Cramming more components onto integrated circuits’, Electronics, Vol 38, No 8, 19 April 1965.
[9] Y. Bar-Cohen, “Biologically Inspired Technology using Electroactive Polymers (EAP),” Proceeding of the EAPAD Conference, SPIE Smart Structures and Materials Symposium, Paper #6168-02, San Diego, CA, Feb. 27 to March 2, 2006.
[10] Scott Burchill and Andrew Linklater, ‘Theories of International Relations – 3rd Edition’, Palgrave Macmillan, New York, 2005. pp 18 – 23.
[11] ibid, pp 29 – 44.
[12] Derek McDougall, ‘Studies in International Relations – The Asia-Pacific, The Nuclear Age, Australia – Second Edition’, Hodder Education, 1997, pp 107 – 125.
[13] Sean Kay, ‘Global Security in the Twenty-First Century’, Rowman and Littlefield, 2006, pp 58 – 60.
[14] During OP Iraqi Freedom the sharing of information between coalition partners was considered a key lesson to be learnt.
[15] DARPA, Grand Challenge Homepage, http://www.darpa.mil/grandchallenge.

Bibliography
Bar-Cohen, Y., “Biologically Inspired Technology using Electroactive Polymers (EAP),” Proceeding of the EAPAD Conference, SPIE Smart Structures and Materials Symposium, Paper #6168-02, San Diego, CA, Feb. 27 to March 2, 2006.
Baumann, R.F., ‘Historical Perspectives on Future War’, Military Review, vol. 7, no. 2, March – April 1997, pp 40 – 48.
Burchill, S. and Linklater, A., ‘Theories of International Relations – 3rd Edition’, Palgrave Macmillan, New York, 2005.
Buzan, B., ‘An Introduction to Strategic Studies – Military Technology and International Relations’, The Macmillan Press, 1987
De Landa, M., ‘War in the Age of Intelligent Machines’, Zone Books, New York, 1991
Huang, H., Pavek, K., Albus, J., and Messina,E., “Autonomy Levels for Unmanned Systems (ALFUS) Framework: An Update,” Proceedings of the 2005 SPIE Defense and Security Symposium, March 2005, Orlando, Florida.
Kay, S., ‘Global Security in the Twenty-First Century’, Rowman and Littlefield, 2006.
Kurzweil, R., ‘The Age of Spiritual Machines’, Viking Adult, 1999.
McDougall, D., ‘Studies in International Relations – The Asia-Pacific, The Nuclear Age, Australia – Second Edition’, Hodder Education, 1997
Rowley, B., ‘The Future Is Not What It Used To Be’, Air War College, April 1995, http://www.au.af.mil/au/awc/awcgate/awc-ofut.htm.
Smart, J., ‘Singularity Timing Predictions’, http://www.accelerationwatch.com/singtimingpredictions.html
Vinge, V., ‘The Coming Technological Singularity’, Vision 21 Symposium, March 30 – 31 1993, (sourced from http://en.wikisource.org/wiki/The_Coming_Technological_Singularity).
