
Inside the Chilling World of Artificially Intelligent Drones

Sixteen years of war has brought something we never prepared for: increasingly intelligent drones in the hands of terrorists.

On the night of January 5, something took place that had never happened before: a swarm of DIY drones attacked two military installations in Syria. The 13 crudely made aircraft, powered by small gas engines and flying on wings fashioned from laminated Styrofoam, zeroed in on their targets: the vast Russian army base at Khmeimim and the naval base at Tartus on the Syrian coast. Bombs packed with the explosive PETN and shrapnel were secured to their wings. The drones’ radar signature was minimal, and by taking advantage of a cool night they were able to fly at low altitudes and avoid detection. It is unclear whether the drones were able to communicate with one another and thus behave as a true swarm. What they did do was approach their targets at different angles and altitudes in an attempt to confuse Russia’s air defense systems.

The attack, which was purportedly carried out by a Syrian rebel group based near Idlib, failed. Russian forces, it is claimed, detected the drones and destroyed some of them with a combination of kinetic and electronic air defenses. Others were brought under electronic control and guided to the ground, leaving them largely undamaged.

According to Russian military spokesmen, the drones were equipped with barometric sensors that allowed them to climb to a preselected altitude, an automatic leveling system for their control surfaces, and precision GPS guidance that would have taken them to their preselected targets had they not been intercepted. In short, the drones, once launched, may have required no further input from those who launched them.

The incident was an ominous portent of what the world will soon face as governments race to develop smaller, more intelligent, and ultimately wholly autonomous drones. While states and the corporations that work for them remain in control of the most advanced military and surveillance technologies, they face the perennial problem of leakage: the inevitable diffusion of technology into the wider world. This diffusion is well underway in the fields of drones and artificial intelligence (AI). Non-state actors around the globe have been using drones to spy on and attack enemy forces for years. What is new, and what the attack on the Russian bases in Syria demonstrates, is that non-state actors are, just like states, becoming more capable of building and using drones that have minds of their own, albeit primitive ones.

In his prescient and timely book Future War: Preparing for the New Global Battlefield, Major General (Ret.) Robert Latiff argues that we are at a point of divergence: technologies are becoming increasingly complex while our ability and willingness to understand them and their implications are on the decline. He asks, “Will we allow the divergence to continue unabated, or will we attempt to slow it down and take stock of what we as a society are doing?” At this point, there is little evidence that governments or the societies they preside over are undertaking the kind of probing reassessment of technology that Latiff calls for. On the contrary, they’re competing to develop ever more advanced drones and the AI that will ultimately allow them to think for themselves. At the same time, rival governments and non-state actors are increasingly able to use drones for their own purposes, forcing governments to design and field a host of electronic and kinetic countermeasures in response. The threat posed by drones is so difficult to overcome that even the Russians, who are at the forefront of electronic countermeasures, use trained falcons to guard the Kremlin against the smallest drones.

A dangerous cycle has thus begun: governments and the corporations they rely on are driving the development of unmanned technologies and AI, which in turn requires ever more advanced and costly countermeasures to defend those same governments against technology that has already leaked out and will continue to do so. Beyond setting this cycle in motion, the spread of drone technology and AI threatens to overwhelm even the most advanced countermeasures. Few technologies are as capable of lowering or eliminating the psychological, physical, and monetary costs of killing as drones, and it is this subtle yet profound effect that may pose the greatest threat. This is a battle that is in its infancy, but it is one that humanity as a whole could lose.

♦♦♦

On October 2, 2016, the Islamic State in Iraq and Syria (ISIS) launched its first documented attack using an armed drone. The quadcopter descended near Peshmerga fighters and French soldiers in northern Iraq and settled on the ground. When its targets tried to take it apart, the drone exploded, killing two Peshmerga and wounding two French soldiers. In its nine-month battle against Iraqi forces in Mosul, ISIS made frequent use of drones, both modified and of its own manufacture. While the drones did not change the outcome of the battle, ISIS’s use of them caused consternation and fear among Iraqi troops, who were initially unprepared for the threat.

“Shotguns, everyone wanted shotguns,” an Iraqi commander said when asked about the weeks in late 2016 when ISIS first began using drones to drop small bombs on Iraqi soldiers. “The quads [quadcopters] are the hardest to hear, see, and hit. At night, they’re really hard to see even with night-vision equipment. One of my men found an old marsh gun [a 10-gauge shotgun] and we used that to take down a few of them.”

In January 2017, ISIS declared that it had formed the “Unmanned Aircraft of the Mujahedeen,” a unit devoted to drones. While the group had been using drone technology for surveillance and targeting for at least two years, the October 2016 attack in Iraq marked the debut of its armed drones. Throughout 2017, despite its mounting losses in Iraq and Syria, ISIS continued to make advances in how it modified, manufactured, and deployed drones.

“We watched how they got better and better at hitting us,” explained the same Iraqi commander. “First they send in an unarmed drone as a spotter, which they use to figure out where we’re most vulnerable—an ammo cache, a patio where men are cooking or relaxing. Then they send an armed drone to those coordinates, often at night or in the very early morning when the winds are calm.” ISIS claims to have killed in excess of two hundred Iraqi soldiers with its drones. While this number is exaggerated, it is certain that the group killed at least several dozen. Most critically, ISIS has continued to devote part of its increasingly limited resources to developing its drone capabilities. ISIS, just like the governments that are fighting it, realizes that drones are the future of warfare.

At the same time that groups like ISIS are devoting more and more resources to drone warfare, governments and corporations are racing to develop countermeasures. In late 2016 and early 2017, soldiers in Iraq and Syria—especially Iraqi soldiers—had few options for defending themselves beyond firing their weapons into the sky. Within weeks of ISIS’s first armed drone attack in October 2016, countermeasures, many of which were already under development, were rushed from laboratories to the battlefield. These range from electronic “drone guns” costing tens of thousands of dollars, which jam a drone’s ability to receive signals from its operator, to shotgun shells loaded with a wire net designed to entangle a drone’s propellers and bring it to the ground.

Some of the countermeasures proved effective in the battle against ISIS. However, because drones are relatively inexpensive and easily manufactured, their loss has little impact on their operators. Groups like ISIS are already electronically hardening their drones and adjusting their tactics to make them less susceptible to countermeasures. Non-state actors are also developing the capability to use swarms of drones. Swarming, just as in the natural world, is one of the most effective ways of overwhelming an enemy’s defenses.

♦♦♦

“They say the rocks have ears,” explained a Yemeni journalist who studies and writes about al-Qaeda in the Arabian Peninsula (AQAP). “They’re sensors but they look just like rocks. Herders first noticed them. They notice anything that is out of place. Nobody seems to know exactly what they do, but they know they’re related to the drones.”

The sensors, hidden in plasticized containers designed to mimic the rocks of the areas where they are dropped, were likely part of the U.S. government’s effort to combat AQAP in Yemen. While their exact capabilities and manufacture are unknown, they are likely similar to Lockheed Martin’s Self-Powered Ad-hoc Network (SPAN). The sensors, some of which are solar powered, can lie dormant for years and be programmed to activate in response to anything from ground vibrations to the sound signature of a specific automobile engine. Once activated, they can remain passive, continuing to collect information, or they can signal a drone to come and investigate or neutralize a target.
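The basic pattern described here—long dormancy punctuated by threshold-triggered activity—is simple to express in software, which is part of why it spreads so easily. The sketch below is purely illustrative: it assumes nothing about SPAN’s actual design, and every function name and threshold in it is invented for the example.

```python
import random
import time

# Invented thresholds, for illustration only.
WAKE_THRESHOLD = 0.6      # normalized vibration level that ends dormancy
REPORT_THRESHOLD = 0.9    # sustained level worth escalating

def read_vibration():
    """Stand-in for a real geophone or accelerometer reading (0.0-1.0)."""
    return random.random()

def run_sensor(report, poll_seconds=10):
    """Sleep at near-zero power, wake on strong vibration, then either
    keep collecting passively or escalate to the wider network."""
    while True:
        if read_vibration() < WAKE_THRESHOLD:
            time.sleep(poll_seconds)   # a real node would deep-sleep here
            continue
        # Awake: take a short burst of samples before deciding anything.
        burst = [read_vibration() for _ in range(5)]
        if max(burst) > REPORT_THRESHOLD:
            report(burst)              # e.g., radio a nearby relay node
        time.sleep(poll_seconds)

if __name__ == "__main__":
    run_sensor(lambda burst: print("escalating:", burst))
```

A fielded device would add power management, radio protocols, and signature classification, but the dormant-wake-escalate skeleton is this simple, and nothing in it is beyond a hobbyist.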

Yemen has long been a test bed for drone technology and tactics. The first recorded assassination by a drone took place there on November 3, 2002, when al-Qaeda operative Abu Ali al-Harithi was killed by a Hellfire missile launched from a Predator drone. Since then, the U.S. government has continued to deploy a range of drones to hunt and kill those who end up on its kill lists, as well as so-called targets of opportunity that happen to be in designated “kill boxes.” The exact number of individuals killed by drones in Yemen is unknown, as is the number of civilians killed in these attacks. What is certain is that AQAP has weathered this warfare, despite the fact that the U.S. government has spent billions of dollars fighting an organization that, at certain points in its history, had fewer than 50 dedicated operatives.

Being hunted and killed by drones has not only changed the way AQAP fights and organizes its forces; it has also forced the group to develop a range of countermeasures, including trying to co-opt the AI that drives, at least to some degree, target selection. In a sense, AQAP’s operatives may have figured out how to game the system.

“They claim that they have let the drones kill some of their rivals,” a Yemen-based analyst explained. “They planted phones in cars that were carrying people they wanted eliminated, and the drones got them.” While these claims cannot be verified, AQAP knows that data from phones, such as voice signatures and numbers, is vacuumed up by sensors and fed into the algorithms that, at least in part, help analysts decide whom to target. This data is the equivalent of digital blood spoor for the drones that are hunting them.

In a recent video released by AQAP, Ibrahim al-Qosi warns about the dangers of using cell phones. In another recent video, the emir of AQAP, Qasim al-Raymi, decried his operatives’ inability to refrain from using their phones, claiming that most of the attacks on the group over the last two years resulted from its operatives’ cell phone use.

AQAP may have figured out how, at least in a crude manner, to turn the United States’ terrorist-hunting algorithms to its own ends. What is certain is that the group, just like ISIS, is using and developing its own drones. There are as yet no reports of AQAP using armed drones. However, given the organization’s expertise with explosives and the increasing availability of military-grade small drones on the black market (the UAE and Saudi Arabia are providing these to the forces they support in Yemen), it is only a matter of time until armed drones make their debut in Yemen or elsewhere.

In the besieged Yemeni city of Taiz, where AQAP has been present for at least the last two years, off-the-shelf and modified drones have been used by all sides in the conflict for surveillance. Just as in Iraq and Syria, various groups, including AQAP, are becoming more and more adept at deploying drones with primitive but effective AI. In Taiz and in other parts of Yemen where Houthi rebels and various factions aligned with Saudi Arabia and the United Arab Emirates are fighting for control, semi-autonomous drones are being used to map enemy positions and monitor the movements of rival forces. These drones are programmed to fly through a preselected set of waypoints that, when desired, moves them in a grid pattern, providing a comprehensive view of a particular area. The same drones can be programmed to hover over specific areas until they receive a low-battery warning, which prompts them to return to their launch site or another preselected location. This kind of persistent and low-cost surveillance is critical, just as it is on a far larger and more precise scale for the U.S. government’s drones, for determining patterns of life for an individual or group prior to targeting them.
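None of this requires exotic software. The fragment below is a minimal sketch of the kind of “lawnmower” waypoint grid that consumer mission-planning apps generate for aerial mapping and photography; the coordinates, function names, and battery threshold are all invented for the example and do not reflect any particular product.

```python
def survey_grid(south, west, north, east, passes):
    """Return (lat, lon) waypoints sweeping a bounding box in alternating
    east-west passes: the "lawnmower" pattern used for aerial mapping."""
    waypoints = []
    for i in range(passes):
        lat = south + (north - south) * i / max(passes - 1, 1)
        row = [(lat, west), (lat, east)]
        if i % 2:                 # reverse every other pass so the path
            row.reverse()         # sweeps back and forth without backtracking
        waypoints.extend(row)
    return waypoints

def next_action(battery_fraction, home, waypoints, index):
    """Fly the grid until the battery runs low, then return to launch."""
    if battery_fraction < 0.25 or index >= len(waypoints):
        return ("return_to_launch", home)
    return ("fly_to", waypoints[index])

if __name__ == "__main__":
    grid = survey_grid(37.000, -122.010, 37.004, -122.000, passes=4)
    print(next_action(0.80, grid[0], grid, 2))   # ('fly_to', ...)
    print(next_action(0.20, grid[0], grid, 2))   # ('return_to_launch', ...)
```

The point is not the specific code but how little of it there is: the logic that turns a cheap quadcopter into a persistent mapping platform fits in a few dozen lines, and equivalents ship preinstalled on off-the-shelf drones.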

While there are no signs that militant groups like AQAP or ISIS are close to employing the more advanced AI that would allow them to use drones to identify and target specific individuals, these groups and others will, in a short period, have access to such technology. AQAP has a clear understanding of how its operatives are being hunted through the digital footprints they leave behind, and it is not much of a leap to put the same approach into action on a smaller scale. There are already devices on the open market that allow cell phones to be tracked and monitored; non-state actors like Hezbollah, itself an early adopter of drones, are already using them. Face and gait recognition software, and the high-resolution cameras that allow it to function, are also widely available and undoubtedly already in use by well-funded non-state actors like Hezbollah. All of this technology is being refined on an almost weekly basis by a panoply of governments, private companies, and individuals, and much of it leaks out into licit and illicit markets.

The next step is to marry this technology with smaller, faster, cheaper, and far more intelligent drones. Many states already possess insect-sized drones that carry surveillance packages rivaling those of early versions of the Predator drone. Militant groups like AQAP, ISIS, and a host of others will soon have access to lesser versions of these as the technology rapidly makes its way into the public sphere.

♦♦♦

On January 26, 2015, a quadcopter drone crashed on the White House lawn. The two-pound device, which was operated by an unidentified federal government employee, was too small to be detected by the radar installed at the White House. While this drone was reportedly being flown by the employee for recreational purposes, its ability to penetrate the airspace around one of the most secure buildings in the nation’s capital proved how vulnerable these sites are to drone-based attacks and surveillance.

Protecting high-value targets in Washington, D.C., from drones is a daunting challenge that will require billions of dollars in electronic and kinetic countermeasures. The same countermeasures will also need to be deployed at thousands of other critical and vulnerable sites around the country. Defending all of these installations, government, military, and civilian alike, is an impossible task, especially as drones’ capabilities grow.

Apart from the stunning and manifold implications this technology has for state security, the use of drones has had a more subtle yet profound effect on those who use them. In his book A Theory of the Drone, Grégoire Chamayou draws on the work of soldier-psychologist Dave Grossman and suggests that “the closer the human target, the greater the initial resistance that needs to be overcome in order to kill it; conversely, the greater the distance, the less difficult it is to perform the act.”

Chamayou struggles to situate drones on a continuum of weapons used to hunt and kill other humans, ordered by the proximity between the hunter and the hunted. The most intimate form of killing is hand-to-hand combat; the most distant is the pilot releasing his payload of bombs at thirty thousand feet or the officer ordering a missile launched at some far-off target. What makes drones so different and so difficult to situate is that through them an operator can see his intended target’s facial expression and, with more advanced technology, even his eyes. Yet, just like the launch officer, or to a lesser degree the bomber pilot, the drone operator is out of reach. In the case of the U.S. government, he or she is likely thousands of miles away and invulnerable to harm. There remains a kind of one-way intimacy between the hunter and the hunted, even though it is pixelated and mediated through screens; it is enough to unsettle and traumatize many of the soldiers charged with operating the drones that hunt and kill in countries like Yemen.

AI solves this problem. In the near future, if it is not happening already, the human element will be removed altogether. The software running the drone will decide who lives and who dies, leaving no place for any intimacy between the hunter and the hunted. Chamayou’s continuum collapses when it is no longer a case of humans killing humans but of a robot and its algorithms initiating the carnage.

While it may be years before that kind of “hard” or “complex” AI, the programs that allow a machine to learn and exercise autonomy, is used by terrorists and other non-state actors, it will happen. The primitive AI that guided the drones toward the Russian bases in Syria, and that allows AQAP to use off-the-shelf drones to conduct surveillance in Yemen, was available only to states just a few years ago. Now it is available to anyone with a few hundred dollars.

The availability of this technology comes at a time when militant groups like ISIS and AQAP are calling for—and supporting—what they call “lone wolf” attacks on targets in the West. While these groups have few qualms about killing those they deem to be infidels, at the level of the individual operative there is almost always doubt, anxiety, fear, and even guilt. This is especially the case with those who agree to use their bodies as bombs. Such fear and anxiety often cause individuals to second-guess their handlers, their mission, and even their faith in whatever ideology is driving them. Some abandon their missions altogether and others make mistakes. However, just as AI-powered drones remove the problematic and fragile human element for states’ war-making, they will also remove it for terrorist groups. As Chamayou points out in his book, “the danger is not that robots begin to disobey. Quite the reverse: it is that they never disobey.”

A member of a terrorist organization like ISIS could thus launch a “fly and forget” drone on a mission to release a bomb or chemical agent without mistake-inducing fear or anxiety. The operative is entirely removed from the act of killing, which makes the act far easier to carry out, and he is essentially invulnerable: he can send an electronic command from five thousand miles away that sets a drone, or a swarm of drones, flying, driving, or crawling toward its target.

Advanced AI paired with drone technology has the potential to overcome even the most effective countermeasures because it dramatically lowers, and may eliminate, the psychological, physical, and monetary costs of killing. In time, such technology threatens to turn the whole world into a battlefield rife with targets of all kinds.

♦♦♦

Drones thus have the potential to usher in a golden age for terrorists and militants. Yuval Noah Harari, author of Sapiens and Homo Deus: A Brief History of Tomorrow, argues that terrorism is a show or spectacle that captures the imagination and provokes states into overreacting. He suggests that “this overreaction to terrorism poses a far greater threat to our security than the terrorists themselves.” The attacks of 9/11 successfully provoked the U.S. government into launching its war on terror, now in its seventeenth year. In addition to prompting the invasions of Afghanistan and Iraq, whose unintended consequences continue to reverberate, the war on terror has driven the rapid development of drones and AI. In many respects, drones are the perfect tools for states: they offer deniability, they produce no images of flag-draped coffins, they do not get PTSD, they do not question orders, and they never entertain doubts about what their algorithms tell them to do. Unfortunately for all of us, they are also the perfect tools for terrorists and militants, who are less constrained than states by political agendas, bureaucratic structures, and, to some degree, ethical considerations. These groups are also often far more creative and innovative for the same reasons. This deadly creativity, combined with ever more advanced drones, will in time overwhelm the countermeasures that states are developing and deploying.

In his timeless book War of the Flea, Robert Taber uses the analogy of the dog and its fleas: the state and its military forces are the dog, and the guerrilla forces are the fleas attacking it. The dog is of course far bigger and more powerful than the fleas, but it can do little because its enemies are too small and too fast, while it is too big, too slow, and has too much territory to defend. The fleas exhaust it and suck it dry. Today the fleas could easily be drones operated by guerrillas, terrorists, or a host of other non-state actors. They could provoke the dog, exhaust it, and kill it without ever coming within reach of its snapping jaws.

Major General Latiff’s call for governments to slow the development of this technology and assess the consequences of its inevitable leakage into the public sphere should be heeded if we are to avert the kind of outcome foreseen by the AI experts behind the Campaign to Stop Killer Robots. In November 2017, the campaign released a short dramatic film entitled Slaughterbots that shows clearly where the technology is headed and how it could be used by terrorists and states alike. The “slaughterbots” are small AI-empowered drones that fly in swarms, identify their targets with facial recognition software, and eliminate them. All of the technology featured in the film is in development, and much of it already exists; it is just a matter of refining and perfecting it.

The leakage of these technologies already means that states no longer have a monopoly on killing by remote control. AI will ensure that a remote control is no longer even required. There will be more attacks like the one on the Russian base, and as the drones get smaller and more intelligent, they’ll start to look more and more like those slaughterbots. Without a dramatic reassessment of why and how these technologies are being developed and used, we could all end up inhabiting one big global “kill box.”

Michael Horton is a senior analyst for Arabian affairs at the Jamestown Foundation. He is a frequent contributor to Jane’s Intelligence Review and has written for numerous other publications including The National Interest, The Economist, and West Point’s CTC Sentinel.
