Daniel Ben-Shaul

Attack of the drones: New age warfare

It’s 2013, and President Barack Obama is giving his first major counterterrorism speech since being re-elected. In now well-rehearsed post-9/11 security rhetoric, the then-president defends his administration’s use of unmanned aerial vehicles, more commonly known as drones. “We know a price must be paid for freedom,” Obama asserts, before continuing: “but the use of drones is heavily constrained [and] actions bound by consultations”. Six years on, his words ring hollow. He left office with hundreds of innocent civilian victims of drone strikes in his wake, and with unanswered questions about the ethical and legal repercussions of covert drone warfare.

While administrations have changed hands, the US and the wider global community have only accelerated their use of drone technology in both military and intelligence settings. Technological innovation has always driven successive revolutions in military affairs, but the advent of militarised drones equipped with Artificial Intelligence is undoubtedly the newest frontier of modern warfare.

Conventional militarised drones already make for economically sound warfare. They allow an operator to hover covertly over targets for hours at a fraction of the cost of the alternatives, which makes them good business for governments. Drones equipped with Artificial Intelligence take this a step further. While a conventional drone transmits its feed back to a human operator, who then decides what course of action to take, an AI-equipped drone removes the operator from the equation. Instead, algorithms trained on historical data and analysis of past decisions could instantly decide how to react in a conflict scenario, distinguishing between friend and foe and making a calculated decision about what action to take on the battlefield, without waiting for human feedback. For those fearful of this technology, it raises the prospect of a future army of autonomous killer drones that could operate without any human input at all.

While all this sounds futuristic, the autonomous drone of tomorrow is already here today. In 2017, the US Department of Defense launched Project Maven, an aptly eerie-sounding program that aims to use AI and machine learning to improve the analysis of drone footage. Elsewhere, the British Ministry of Defence is reported to have invested in “predictive cognitive control systems”, which would use deep-learning neural networks to try to predict how events on the battlefield will unfold.

While these technological innovations are certainly exciting for the future of automation, they raise the question of whether using this technology is ethical in the first place. Some defend the ethical rationale behind Artificial Intelligence drones: while human operators can suffer from fatigue or emotional stress, “machines never get angry or seek revenge”, proclaims Paul Scharre, a Pentagon defence expert. Instead, they perform the mission at hand and deliver a consistent and predictable legal outcome. But even if the predictability of autonomous drone strikes is attainable, should predictability without emotion truly be the best measure of successful warfare? Further, what happens, and who is responsible, when it all goes wrong? If erroneous data were to lead to a failed autonomous strike, who should face the consequences, given that a machine cannot be tried in a court of law?

Indeed, it is this legal grey area, and the failure of states to legislate or reach agreement on the issue, which is of most concern. Despite the United Nations’ best attempts to restrict lethal autonomous weapons in early 2019, a number of states, the UK, US, Russia, Israel and Australia amongst them, have so far blocked any meaningful action from being taken. By not addressing these ethical issues, governments and international organisations have taken their eye off the ball.

As the private sector has demonstrated, the positive potential of these technologies knows no bounds. SeeTree, an Israeli technology start-up that uses AI-equipped drones to provide in-depth analytical data for optimising agricultural yields, is but one innovative new use of the technology. The medical sector, too, stands to benefit, with several companies seeking to use drones to deliver medical supplies intelligently and accurately to hard-to-reach locations and emergency relief zones.

Yet inaction will only stifle this innovation, private and military alike, and cause more problems in the future. Regulating the sector will not be easy, especially given the increasing availability of much of the technology, but now that this new age of warfare is upon us, the tough questions must be asked. We must decide what we want the ethical and legal boundaries of our warfare to be, and how much we are really willing to give away as the price of freedom.
