Mission Command Algorithms: An Evolution of Artificial Intelligence & Combat Leadership



‘There is no reason to believe that leadership will be spared the impact of Artificial Intelligence’

Anyone in the military who has served in a war zone will tell you that leadership is preeminent and can literally mean the difference between tragedy and triumph in battle. But leadership in front-line combat units takes on an almost reverent singularity of its own, set against abject adversity and the persistent threat of violent death or serious injury. The Australian Army enshrines leadership as the foundation of its success in often volatile, complex and ambiguous environments. Command and leadership of soldiers, including stewardship of the Army’s institutions, are therefore considered a privilege and a quintessential prerequisite for advancement through the organisation.

Until now, leadership has been a uniquely human phenomenon, but with the very real prospect of intelligent robotic systems operating in human-machine combat teams, this may be set to change in as yet unforeseen ways.

Underpinning success through strong leadership is the practice of Mission Command: a philosophy of command in which subordinates are given clear direction (the commander’s intent), together with the resources and constraints needed to achieve a task. The key distinction of this philosophy is that subordinates have the freedom to decide how to achieve the commander’s desired result. This approach provides flexibility and the capacity to adapt to a changing tactical situation. However, it also accepts some risk: the second-order effects of a subordinate’s decisions, or lack thereof, under the pressure of close combat might produce unfavourable outcomes. So, in terms of autonomous modernisation, how might the exercise of mission command in combat situations evolve? The following dystopian narrative illustrates how this might unfold.
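(As an aside before that narrative: the elements of such an order map quite naturally onto a data structure. The sketch below is purely illustrative, with every field name invented rather than drawn from doctrine or any real system; the point is that the commander fixes the intent, resources and constraints, while the ‘how’ is deliberately left open.)

```python
from dataclasses import dataclass

@dataclass
class MissionOrder:
    """Hypothetical encoding of a mission command order: the 'what' and
    'why' are specified; the 'how' is deliberately left to the subordinate."""
    commanders_intent: str    # the desired end state
    resources: list[str]      # assets allocated to the task
    constraints: list[str]    # boundaries that must not be crossed
    # Deliberately absent: any 'method' field. Freedom of action is the point.

order = MissionOrder(
    commanders_intent="secure the beach landing site by first light",
    resources=["recon drone flight", "mechatronic manoeuvre team"],
    constraints=["no fires beyond the grid boundary", "minimise collateral damage"],
)
```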

The Robotics and Autonomous Systems Strategy crafted in 2018 became a reality by 2028, substantially transforming the Army’s force structure and land combat system capabilities. This included mechatronic manoeuvre doctrine developed to account for the new paradigm, in which humans began to ‘lead’ sentient machines, including humanoid robots, to achieve directed military objectives. By 2033, a Mechatronic Task Unit is preparing to deploy from a Landing Helicopter Dock to a foreign shore, following sea transit with the Amphibious Task Group.

A flight of smart reconnaissance drones deploys to ensure the beach landing site is clear of threats. The site itself is clear, but a drone detects several humans on a hill five kilometres away. The drone’s algorithms determine they are not a threat, so it reports all clear. Then, as landing craft deliver a Mechatronic Manoeuvre Team to shore, the beach erupts in fountains of flame and sand as a heavy artillery bombardment strikes with deadly accuracy. The small, seemingly inconsequential team on the distant hill turns out to be forward observers for a concealed gun regiment.

This fictional scenario highlights two pertinent issues for mechatronic mission command. First, what constitutes a threat may be open to computational interpretation by a smart machine, whereas a human scout might have recognised the significance of a small team overlooking the landing site and reported it. Drones configured for reconnaissance will therefore require algorithms and on-board intelligence processing tailored to avoid catastrophic outcomes like the one in the narrative above. Second, the drone’s mission parameters were narrowly focused on the security of the beach landing site itself, so a small team of humans many kilometres away could not, in the drone’s limited reasoning, physically influence the landing craft. This highlights the risks of deficient variable analysis and decision algorithms operating without human oversight.
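The failure mode in the narrative can be made concrete with a deliberately naive sketch. The code below is purely illustrative, with every name, threshold and rule invented rather than taken from any real system: a classifier that scopes ‘threat’ to direct physical influence over the landing site will dismiss observers who project force through a third party, such as a distant gun regiment.

```python
from dataclasses import dataclass

@dataclass
class Contact:
    """A detected entity and its distance (km) from the landing site."""
    kind: str            # e.g. "person", "vehicle"
    distance_km: float
    armed: bool

DIRECT_FIRE_RANGE_KM = 2.0  # invented threshold, for illustration only

def naive_threat(contact: Contact) -> bool:
    # Reproduces the narrative's failure: unarmed people 5 km away are
    # ignored, even though they can direct artillery onto the beach.
    return contact.armed and contact.distance_km <= DIRECT_FIRE_RANGE_KM

def wider_threat(contact: Contact) -> bool:
    # A broader (still simplistic) rule: any human overlooking the site
    # is a potential forward observer, flagged for human review rather
    # than dismissed outright.
    return naive_threat(contact) or contact.kind == "person"

observers = Contact(kind="person", distance_km=5.0, armed=False)
print(naive_threat(observers))   # False -> drone reports 'all clear'
print(wider_threat(observers))   # True  -> flagged for human review
```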

Consequently, how much ‘freedom’ to decide should be afforded to weaponised drones and robot systems enabled with artificial intelligence (AI)?

It seems the seminal question is whether military mission command philosophy and practice can realistically be optimised for digital algorithms and source code. Or might this simply be a matter of sophisticated software engineering?

Or, to reduce strategic risk, will specific directed missions with strict, inflexible approaches be the way forward for mechatronic combat systems that lack a conscience or an appreciation of tactical patience? If so, a fixed approach also carries an opportunity cost and the risk of suboptimal outcomes. The answer will likely be discovered in algorithm engineering and source code development: algorithms provide the steps, digital rules and ‘actions on’ to complete tasks, while source code is the computer language used to implement those algorithmic instructions. This will be the ‘secret sauce’ of an autonomous capability, and it is the discipline that will require substantial research and development to ensure it is fit for purpose and secure when it is inevitably merged with combat machines.
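The distinction can be illustrated with a minimal, entirely hypothetical sketch: the ‘algorithm’ is expressed as a table of ‘actions on’ rules, while the Python that executes the table stands in for the ‘source code’. Both the events and the drills are invented for illustration.

```python
# Hypothetical 'actions on' rules (the algorithm): event -> ordered drills.
ACTIONS_ON = {
    "enemy_contact": ["return_fire", "seek_cover", "report_contact"],
    "obstacle":      ["halt", "search_for_bypass", "report_obstacle"],
    "comms_lost":    ["continue_last_order", "attempt_rejoin", "return_to_rally_point"],
}

def execute_actions_on(event: str) -> list[str]:
    """The source code implementing the algorithm: look up the drill for
    an event, defaulting to a safe posture when the event is unrecognised."""
    return ACTIONS_ON.get(event, ["halt", "request_human_guidance"])

print(execute_actions_on("enemy_contact"))
print(execute_actions_on("unknown_event"))  # falls back to human guidance
```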

Mechatronic mission command might therefore be configured via a universal source code for the entire autonomous force, with tailored algorithms for specific military functions. Perhaps this will be required for autonomous interoperability and unity among dissimilar systems in human-machine combat teams. An enterprise code for an autonomous force may seem logical, but it might prove too complex to manage and more vulnerable to enterprise-level cyber disruption. A more elegant solution may be to develop bespoke source code for each class of autonomous system, secured with both mathematical and photonic cryptographic keys, and supported by dedicated digital gateways that manage a network of autonomous networks. These will be vexing design issues for Army in the coming years, as autonomous system decision parameters will probably be replicated at the operational and strategic levels of war.
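One way to picture the ‘bespoke code per class of system’ idea is through per-class message authentication at the gateways. The sketch below is a minimal illustration using standard HMAC signatures; the class names and keys are invented, and a real system would rely on proper key management and stronger (asymmetric or quantum-resistant) schemes.

```python
import hashlib
import hmac

# Hypothetical per-class secret keys: each class of autonomous system
# holds its own credential, so compromising one class does not expose all.
CLASS_KEYS = {
    "recon_drone":   b"recon-drone-secret-key",
    "ugv_transport": b"ugv-transport-secret-key",
}

def sign(system_class: str, message: bytes) -> bytes:
    """Sign a message with the key belonging to the sender's class."""
    return hmac.new(CLASS_KEYS[system_class], message, hashlib.sha256).digest()

def gateway_accepts(system_class: str, message: bytes, tag: bytes) -> bool:
    """A digital gateway between autonomous networks verifies that a message
    really came from the class it claims before relaying it onward."""
    return hmac.compare_digest(sign(system_class, message), tag)

msg = b"report: beach landing site clear"
tag = sign("recon_drone", msg)
print(gateway_accepts("recon_drone", msg, tag))    # True
print(gateway_accepts("ugv_transport", msg, tag))  # False: wrong class key
```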

Moreover, it is quite sobering to contemplate that future conflicts may become a contest of war algorithms.

If this does materialise, the prospect of losing a battle to the side with the smartest and most capable machines sits in stark tension with the established notion that superior leadership and the best-trained soldiers will always succeed. The corollary is that the brittleness of intelligent combat systems may itself lead to tactical defeat, particularly during this nascent period of first-generation AI development. But how might this evolve when third- or fourth-generation AI is fielded and brittleness is engineered out entirely? Ultimately, how smart machines cope with the fog of war and the litany of wicked problems that continually present in combat could come down to a clever computer program and advanced materials science. However, it remains to be seen how unique and complex smart machines will be configured, and how they might operate at scale as part of an Internet of Military Things.

A synthetic, algorithm-based approach to war will disrupt established military norms, so agreed standards and protocols will be critical to informing the systems-thinking process.

Mission command algorithms could yet be a long way off, while the ethics and lawful use of autonomous weapon systems have already become a field of discourse in their own right. Moreover, it is instructive to consider that inanimate military systems could, in future wars, be capable of conversing freely with their human warrior counterparts, commiserating with them, or even saving their lives in close combat. The concept of synthetic comradeship will become more likely if artificial intelligence systems advance to mimic the human brain and are successfully merged with humanoid biorobots or soft robotics systems. So how will this emerging technology change military leadership dynamics in a human-machine combat team? Is it possible we might also see intelligent robot systems leading a menagerie of other smart machines to achieve a human commander’s intent? It seems plausible; mechatronic leadership algorithms might therefore also emerge in an autonomous future. Who would have thought?

What is evident is that the concept of AI in the context of combat leadership may be the next moral challenge for society and what it expects from its military.

It appears conceivable that this feature of autonomy might represent a new chapter in leadership theory and the historical practice of humans inspiring others to achieve great outcomes. Designers will have to consider whether sentient machines may learn to question orders from humans, or from other classes of machine, when their own source code and algorithms indicate a low probability of mission success. Will an absolute power of human veto be necessary if an intelligent machine objects to an order? How will combat leaders reconcile human subordinates who develop a strong ‘affinity’ with humanoid ‘synthetic comrades’, especially knowing the robots will ultimately be expendable? Moreover, how much ‘personality’ will smart humanoid machines exhibit, or will they by necessity be devoid of human-like traits? While answers to these pivotal questions are yet to materialise, what is certain is that combat leadership will endure as a precondition for success as Army enters an epochal technological inflection point, akin to the arrival of tanks and aircraft on the battlefield.
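The veto question lends itself to a simple control-flow illustration. The sketch below is hypothetical, with the threshold, names and success estimate all invented: the machine may register an objection when its algorithms rate a mission poorly, but execution always turns on an explicit human decision.

```python
# Hypothetical human-veto gate; all names and thresholds are invented.
OBJECTION_THRESHOLD = 0.3  # the machine objects below 30% estimated success

def machine_response(order: str, estimated_success: float,
                     human_overrides: bool) -> str:
    """Decide whether to execute an order while honouring an absolute
    human veto: the machine may object, but the human decision is final."""
    if estimated_success >= OBJECTION_THRESHOLD:
        return f"executing: {order}"
    if human_overrides:
        return f"objection noted, human override given -> executing: {order}"
    return f"objection sustained -> holding and awaiting new orders: {order}"

print(machine_response("seize hill 442", 0.8, human_overrides=False))
print(machine_response("seize hill 442", 0.1, human_overrides=True))
print(machine_response("seize hill 442", 0.1, human_overrides=False))
```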


About the author

Lieutenant Colonel Greg Rowlands is an infantry officer with 27 years of Army service. He is a graduate of Australian Command & Staff College and the Capability & Technology Management College. Greg has also completed an undergraduate degree and three master’s degrees from the University of New England, University of Canberra and University of New South Wales.

One thought on “Mission Command Algorithms: An Evolution of Artificial Intelligence & Combat Leadership”

  1. A thought-provoking piece. I think the author is right to ask questions about how similar to humans future autonomous robotic systems might be. My own view, underpinned by substantial critiques from philosophy and by reflection on the current nature of “artificial” intelligence, is that much of the discussion and prediction is biased by an often unquestioned anthropomorphism. Answering questions about how much autonomy we should permit autonomous robotic systems to have needs to address not just the things we share in common, but the much larger list of things that distinguish autonomous robots from humans.
