On May 5th, the International Institute for Strategic Studies (IISS) published an analysis titled “What does AI mean for the future of manoeuvre warfare?”, authored by its Research Fellow for Cyber, Space and Future Conflict, Franz-Stefan Gady.
According to the researcher, artificial intelligence (AI) is, in essence, a computer-based capability to execute human mental processes at superhuman speed.
As such, technologies that utilize AI have already been employed in military operations.
These include automated intelligence-processing software based on machine-learning algorithms developed under the US Department of Defense’s (DoD) Project Maven. [pdf]
That software has been used for some time now in the Middle East in support of counter-terrorism operations.
Modern long-range air- and missile-defense systems, such as Aegis (whether the shipborne variant or Aegis Ashore), use low-level machine-learning algorithms to defend against potential incoming threats.
To push the utilization of AI further, technologies that accelerate the “kill chain” by linking sensors and shooters in an internet-of-things or system-of-systems architecture could have a profound effect on conventional offensive military operations.
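The sensor-to-shooter linkage described above can be illustrated with a deliberately simple sketch. Everything here (the `Detection` and `Shooter` types, the names, the numbers) is hypothetical and purely illustrative, not drawn from any real system: the point is only that each detection is matched directly to a capable shooter, rather than relayed up a command chain.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """A sensor report of a target at (x, y), in km."""
    target_id: str
    x: float
    y: float

@dataclass
class Shooter:
    """A weapons platform at (x, y) with a maximum engagement range."""
    name: str
    x: float
    y: float
    range_km: float

def assign_shooter(detection, shooters):
    """Pick the nearest shooter whose range covers the detected target,
    or None if no shooter can reach it."""
    def dist(s):
        return ((s.x - detection.x) ** 2 + (s.y - detection.y) ** 2) ** 0.5
    in_range = [s for s in shooters if dist(s) <= s.range_km]
    return min(in_range, key=dist, default=None)

shooters = [
    Shooter("battery-A", 0.0, 0.0, 50.0),
    Shooter("battery-B", 100.0, 0.0, 50.0),
]
# battery-B is ~14 km from the target and in range; battery-A is not.
hit = assign_shooter(Detection("t1", 90.0, 10.0), shooters)
print(hit.name)  # battery-B
```

In a real system-of-systems architecture, the hard problems are fusion of conflicting sensor reports, deconfliction between shooters, and human oversight of the loop; this toy pairing function only conveys why automating the sensor-to-shooter step compresses the decision cycle.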
“A 2019 US Army wargame concluded that an infantry platoon, reinforced by AI-enabled capabilities, can increase its offensive combat power by a factor of ten, thus significantly tipping the defensive-offensive balance in the attacker’s favour.”
AI would not only potentially assist in defeating the enemy, but would also greatly reduce the material and human cost of doing so. This holds when fighting an enemy that does not utilize AI; when both sides employ such capabilities, the rules change.
Other functions that AI could carry out include assisting commanders with enhanced intelligence, surveillance and reconnaissance (ISR), as well as battle management systems.
An AI could quickly identify where the lynchpin of an enemy force is and estimate what would be required to crush that position.
“For example, in a hypothetical future war scenario with China or Russia, US commanders, supported by AI decision aids, could promptly disable Chinese or Russian anti-access area denial (A2/AD) capabilities such as long-range sensors and precision-strike platforms with a combination of hypersonic missiles, cyber attacks or special operations forces.”
And once the anti-access/area-denial bubble is burst, naval, aerial and ground forces can move in and carry out the remaining fighting undeterred.
Most importantly, AI would facilitate decision-making, turning it into a much quicker process, in a world where a missile fired by an aircraft one second earlier could be decisive.
The “game of war” would essentially shift towards manoeuvre and away from attrition, and wars such as the one in Afghanistan, which has dragged on for nearly two decades and is ending in what amounts to a US capitulation, presumably would not happen.
According to a Lawrence Livermore National Laboratory report, the nexus of any discrete AI-supported weapons system is a customised ‘software–hardware core’ specified to a given purpose.
However, there is no organizing matrix to harness an array of AI-powered systems working independently and on multiple levels. This matters, for instance, for AI-controlled swarms functioning against a potential enemy.
As the report explains, ‘AI-supported weapons, platforms, and operating systems rely on custom-built software and hardware that is specifically designed for each separate system and purpose. There is currently no master mechanism to integrate the scores of AI-powered systems operating on multiple platforms.’
Or as another analyst aptly put it, the ‘fog of war’ could be replaced by a ‘fog of systems’.
Using AI would also open up cyberspace as a battlefield, far more than any current hacking or cyber attacks can.
“Consequently, without strong cyber defences intrinsic to any AI-enabled capability, manoeuvre warfare will most likely not achieve the intended success in the battlespace.”
The catch is that any advantage conferred by AI, in any field, could be cancelled out if the enemy has an AI of its own capable of carrying out, and countering, the same operations; the field is then back to square one, and the war of attrition returns to the schedule.
“AI may increase the speed of military operations, but it could also lead to less decisive outcomes. And rather than leading to a new form of manoeuvre warfare, it may elicit a more technologically sophisticated version of attrition warfare. Consequently, military planners who envision a 21st century multi-domain manoeuvre warfare version of the 1991 Gulf War campaign, underpinned by AI-enabled capabilities, could be disappointed.”
In general, the analysis provides valuable and largely unbiased insight, though it is, expectedly, framed in the interest of the US, largely disregarding any progress being made by other countries. Rather, it assumes that the US is ahead of the curve and is the only party on a global scale that could potentially employ such capabilities, not Russia or China, while its Middle Eastern foes, such as Iran, are presumably still a ways off from developing such a capability indigenously.
Just as 1984 was meant as a warning, not a literal script to be implemented, so were the Terminator movies.