Artificial intelligence (AI) is in essence a computer-based capability to execute human mental processes at superhuman speeds. AI-enabled technologies have already been deployed in military operations. For example, automated intelligence-processing software, based on machine-learning algorithms developed under the US Department of Defense’s (DoD) Project Maven, has been used in the Middle East to support counter-terrorism operations. Modern long-range air- and missile-defence systems, including the latest variants of the US-made Aegis combat system, also use rudimentary machine-learning algorithms to defend against incoming ballistic- and cruise-missile threats.
AI-enabled technologies based on machine-learning algorithms that accelerate the so-called ‘kill chain’ by linking sensors and shooters in an internet-of-things or system-of-systems architecture could have a profound effect on conventional offensive military operations. A 2019 US Army wargame concluded that an infantry platoon reinforced by AI-enabled capabilities could increase its offensive combat power by a factor of ten, significantly tipping the offence-defence balance in the attacker’s favour. Not only does this suggest that military forces deploying AI applications can outfight an adversary that deploys fewer or no AI-enabled capabilities; it also appears to indicate that AI can help reduce the materiel and human costs of offensive operations.
AI has manifold uses during a military offensive. AI-guided intelligence, surveillance and reconnaissance (ISR) and battle-management systems that provide commanders with enhanced situational awareness could facilitate the quick identification of an enemy’s ‘centre of gravity’ in the battlespace and enable forces rapidly to coordinate joint attacks. For example, in a hypothetical future war with China or Russia, US commanders, supported by AI decision aids, could promptly disable Chinese or Russian anti-access/area-denial (A2/AD) capabilities such as long-range sensors and precision-strike platforms with a combination of hypersonic missiles, cyber attacks and special-operations forces. Following the ‘bursting’ of such an A2/AD bubble, US air, ground and naval forces, supported by space and cyberspace capabilities, would be able quickly to seize, degrade or destroy the enemy’s centre of gravity. Essentially, AI-enabled capabilities could allow US military forces to execute a 21st-century variant of manoeuvre warfare (or ‘All-Domain Maneuver Warfare’, as the DoD refers to it).
This combat philosophy is described by the US Marine Corps as ‘a state of mind bent on shattering the enemy morally and physically by paralyzing and confounding him, by avoiding his strength, by quickly and aggressively exploiting his vulnerabilities, and by striking him in a way that will hurt him most.’ It found its most succinct expression in the 20th century in the German Wehrmacht campaigns of 1939–41 and the US-led 1991 Gulf War campaign.
Faster, but less decisive outcomes?
Twenty-first-century manoeuvre warfare will depend on making faster and better decisions than one’s adversary, which in turn will depend on ‘information superiority’ and dominance in cyberspace. However, several obstacles stand in the way of its becoming a reality, at least in the near term. Firstly, the design of US military units continues to reflect an attrition-centric view of warfare rather than a manoeuvre-centric one. This is reflected in current US military doctrine and a force posture centred on a select number of multi-mission platforms. Doctrinal development in the US armed forces is moving towards a more distributed force structure that in theory should be capable of conducting AI-enabled manoeuvre warfare. Yet this remains at an experimental stage: new operational doctrines such as the DoD’s Joint Warfighting Concept for All-Domain Operations, the US Army’s Multi-Domain Battle operational framework, the US Navy’s Distributed Maritime Operations and the US Air Force’s Multi-Domain Command and Control initiative would all be underpinned by AI-enabled technologies, but are still many years away from being realised.
Secondly, AI-enabled integrated cross-service ISR, defence and battle-management systems might not be attainable in the near term from a technological perspective. For example, the Defense Advanced Research Projects Agency’s OFFensive Swarm-Enabled Tactics (OFFSET) programme envisions swarms of 250 collaborative autonomous aerial and ground systems for offensive operations in an urban environment. A successful deployment of such a swarm would depend on an AI-enabled common operational picture (COP) capable of showing military operations across all domains. However, such a Multi-Domain Command and Control (MDC2) capability is not yet a formal development programme within the DoD.
Moreover, even once successfully fielded, AI-enabled swarms could actually undermine the COP and MDC2 needed for manoeuvre warfare. According to a Lawrence Livermore National Laboratory report, each discrete AI-supported weapons system is built around a customised software–hardware core designed for its specific purpose, and there is no organising matrix to harness an array of such systems working independently and on multiple levels. As the report explains, ‘AI-supported weapons, platforms, and operating systems rely on custom-built software and hardware that is specifically designed for each separate system and purpose. There is currently no master mechanism to integrate the scores of AI-powered systems operating on multiple platforms.’ Or, as another analyst aptly put it, the ‘fog of war’ could be replaced by a ‘fog of systems’.
Thirdly, relying on AI-enabled capabilities will open up a new attack vector from cyberspace. If current trends continue, weapons systems and ISR capabilities will remain extremely vulnerable to cyber attacks. A 2017 study by the Pentagon’s Defense Science Board found that major US weapons systems, including non-nuclear strategic-strike capabilities, remain vulnerable to cyber attacks. The report paints a concerning picture of the consequences of a well-executed attack from cyberspace: weapons could be sabotaged or turned against the forces employing them, supply chains stalled or misdirected, and military commanders paralysed into inaction by suspect reports and doubts about whether their orders had taken effect. Consequently, without strong cyber defences intrinsic to every AI-enabled capability, manoeuvre warfare will most likely not achieve the intended success in the battlespace.
Lastly, it is quite likely that AI-enabled technologies will not only increase the pace of operations by accelerating the kill chain, but will also increase force survivability in defensive operations as much as in offensive ones. The advantages of AI-enabled technologies could thus be evenly distributed between the attacker and the defender, cancelling each other out. AI may increase the speed of military operations, but it could also lead to less decisive outcomes. And rather than producing a new form of manoeuvre warfare, it may elicit a more technologically sophisticated version of attrition warfare. Consequently, military planners who envision a 21st-century, multi-domain manoeuvre-warfare version of the 1991 Gulf War campaign, underpinned by AI-enabled capabilities, could be disappointed.