The Third Revolution In Warfare: Autonomous Weapons & Modern Conflict

The prospect of fully autonomous weapons systems (AWS), or ‘killer robots’, in armed conflict has once again captured the attention of scientists, organizations, and policy-makers this month, spurring renewed international discussion. Early this month, artificial intelligence (AI) experts from 30 countries announced a boycott of the Korea Advanced Institute of Science and Technology (KAIST), a South Korean university, over its partnership with “ethically dubious” defense manufacturer Hanwha Systems, amid fears that the university would “accelerate the arms race” for autonomous weapons. In the US, thousands of Google employees, including senior engineers, have signed an open letter protesting the company’s involvement in advancing the AI capabilities of drones for the Pentagon’s ‘Project Maven.’ The letter urges Google to withdraw from the project and establish a policy stating its intention to “not ever build warfare technology.”

Internal and external debate over the growing involvement of universities and companies in various countries with their state militaries has revealed stark ideological divisions regarding the appropriate use of AI technology. Although universities in the US have a long record of conducting state-authorized military research, less militarized nations, including Australia and Japan, have now begun to pursue their own defense-science partnerships. These efforts have been met with disapproval by many scientists and researchers who question the ethics of universities becoming involved in military research, particularly AWS development.

An increase in state-funded military research at universities around the world is linked to a perceived rise in geopolitical instability, as a new arms race takes place in which states are competing to become the first to develop fully autonomous lethal weaponry for conflict. Many states have attempted to reassure skeptics by stating their intention not to create AWS capable of attacking without meaningful human control. Protest groups such as the Campaign to Stop Killer Robots question how such promises can be enforced under existing international law, and have called for an outright ban on weapons with fully autonomous capabilities.

Until recently, the prospect of robots capable of choosing targets and deploying lethal force in a dynamic environment without direct human involvement seemed a futuristic, hypothetical scenario. It is now time for the international system to convene, consider the real-life implications of such a future, and work to create a binding international treaty tailored to the unique challenges, both ethical and practical, that the use of autonomous weapons creates in warfare and in foreign affairs more generally.

Some aspects of AWS can be regulated under existing international law, such as the Law of War, and modeled on previous campaigns to regulate or ban certain weapons from armed conflict, such as the ban on chemical weapons. There are, however, specific, unprecedented aspects of autonomous weaponry that will pose particular challenges to the establishment of international law. Firstly, concerns have been raised that as weapons become more autonomous and humans begin to “fade out of the decision making loop,” the lines of command will blur, which could allow armies to shirk responsibility for the actions of their autonomous weapons. This becomes a particular issue if autonomous weapons are the cause of human rights violations. Furthermore, any restriction or ban on the development of autonomous technology for weapons must be careful not to stifle the advancement of AI in other sectors that benefit humanity, such as medical applications. Governments must, therefore, be transparent about the intended uses of the AI technology they invest in and develop.

The international system should be wary of the accelerated rate of AWS development taking place across many countries, and of the consequent dehumanization of warfare. Indeed, the process of regulating AWS has begun: 26 countries are now calling for a ban on fully autonomous weapons, and states parties to the Convention on Conventional Weapons have agreed to begin negotiations on a legally binding treaty regarding the use of AWS in November of this year. It is hoped that these efforts will help develop effective stipulations and new international norms that can curb, and ultimately control, the development of AI technology for conflict.

Ruby Leonard
