The Third Revolution in Warfare: AI Weaponry in the Ukraine Conflict


For as long as there have been humans, there has been war. The two are inseparable, immune to the harsh effects of time, which have laid waste to countless other human practices. But war itself is hardly static; the manner in which it is waged changes with every advancement in human technology. Gone are the days when men fought with a sword in one hand and a shield in the other.

Still, the brutal consequences of modern science and technology in war are often shocking. It sometimes seems as if any new discovery can be turned into some sort of weapon. In World War I, for instance, the tank, a product of the internal combustion engine and the caterpillar track, became a much-feared armored vehicle on the battlefield.

Recently, however, a fundamentally different beast has taken over modern warfare: artificial intelligence. The new reliance on advanced computing has led to what we call "autonomous warfare." It is a threat perhaps equalled only by that of the atomic weapon, and one that demands urgent attention, as the Russia-Ukraine war's role as a sandbox for modern weaponry makes clear. Soon enough, the world may find itself yearning for the days when every actor on the battlefield was human, a prospect that, only a few years ago, was utterly unfathomable.

Relatively speaking, semi-autonomous drones used for surveillance are nothing new; the United States deployed them in both World War II and the Vietnam War. In the latter conflict, they harassed North Vietnamese air defenses and scouted targets for bombers. They played a similar role in the Yom Kippur War, when Israel used swarms of US-supplied drones to trigger the Egyptian air defense system. Soon, however, their relatively peaceful presence on the battlefield was repurposed. In 1982, Israel used unmanned aerial vehicles (UAVs) alongside manned planes to wipe out 86 Syrian aircraft, demonstrating the destructive potential of combat drones. Shortly thereafter, the drone strike (the use of an armed drone to eliminate a target) took root in modern military tactics. With transmitters relaying real-time video to their operators and the ability to loiter over a target, drones became even more potent on the battlefield. In the aftermath of 9/11, the United States killed six Al-Qaeda-affiliated men in Yemen with a Hellfire missile fired from a Predator drone.

In the Russia-Ukraine conflict, the world has an unobstructed view of the newest advancements in drone technology, as both sides exploit the destructive power of UAVs. When Russia began the invasion with surveillance drones, Ukraine was not willing to simply play the role of spectator: it deployed the Bayraktar, a Turkish-made UAV, to wreak havoc on Russian convoys. Russia soon struck back with Iranian-made Shahed suicide drones, which damaged Ukraine's energy infrastructure. Unfortunately, the conflict seems destined to involve fully autonomous drones, which, unlike human controllers, can make fatal decisions in milliseconds.

While this lethal decision-making may not yet have been put into practice, the AI systems enabling it already exist. For example, the Turkish company STM developed the fully autonomous UAV "Kargu," which can "rapidly and effectively respond against stationary or mobile targets (i.e. vehicles, persons etc.) through its embedded real-time image processing capabilities and deep learning algorithms." In fact, according to a U.N. report, one of these drones attacked a target in Libya in March of 2020, making it the first of its kind to do so. STM itself has published videos of Kargu swarms, which, if used in combat, would annihilate nearly anything in their path. Despite their ostensible purpose of "anti-terrorist operations and asymmetric warfare," these drones could easily be used by terrorists to cause serious damage.

So, while AI weapons are not yet a reality in the Russia-Ukraine war, their entry into the conflict seems inevitable, especially given their presence on the market. The danger lies in their capacity for highly targeted destruction and complete autonomy, which makes them, in essence, low-maintenance killing machines. Eerily reminiscent of the Einstein-Szilard letter, which warned then-President Franklin Roosevelt of the potential of nuclear weapons, is the recent "Autonomous Weapons Open Letter: AI & Robotics Researchers," endorsed by the likes of Elon Musk and Stephen Hawking. It advocates for a "ban on offensive autonomous weapons," much like the bans on biological and chemical weapons, and suggests that the key question for humanity today is "whether to start a global AI arms race or to prevent it from starting."