U.N. And Red Cross Call For International Restrictions On Autonomous Weapons

The U.N. and the International Committee of the Red Cross (I.C.R.C.) called for the establishment of regulations on autonomous weapon systems in a joint press release issued on October 5th. Autonomous weapon systems are those that “select targets and apply force without human intervention,” according to the release. The U.N. and I.C.R.C. warn that autonomous weapons could change how wars are fought, including by reducing the perceived human cost of warfare and thereby escalating violence, and state that weapons which can independently decide to kill, and whose effects are unpredictable, should be prohibited. Other types of autonomous weapons should be subject to restrictions, they say, to uphold ethics and international law.

Without an international agreement, actors may disagree over how existing international law applies. The U.N. and I.C.R.C. state that their concern has grown as a result of advances in robotics and artificial intelligence (A.I.) that could be integrated into autonomous weapons. While A.I. is not always used in autonomous weapons, it can significantly increase those weapons’ capacity to make decisions independently, according to the U.N. Office for Disarmament Affairs. The U.N. and I.C.R.C. believe that negotiations between world leaders must begin urgently in order to protect humanity.

The joint call comes at a time when new technologies are increasingly seeping into every facet of warfare. New Scientist reported in October that Ukraine has used A.I. in autonomous drones that can identify and attack vehicle targets without human control. A.I.-generated audio clips of Omar al-Bashir, former president of Sudan, have been circulating amidst the country’s civil war, the B.B.C. reports. Meanwhile, Bloomberg reported in July that the Israel Defense Forces have used A.I. to process data and suggest air strike targets, and then to recommend operational details of air raids, with human oversight. These examples illustrate the three manifestations of A.I. in warfare that the I.C.R.C. outlined in an article from the start of October: use in weapons, in cyber and information operations, and in decision-making.

Although some believe that incorporating A.I. decision support systems into military decision-making could aid compliance with international law and reduce civilian harm, many have called for increased discussion and regulation of A.I.’s military applications. The I.C.R.C. said in an article published October 6th that advances in the military use of A.I. raise “profoundly worrying questions for humanity,” and that the international community must form “a genuinely human-centred approach to the development and use of A.I. in places affected by conflict.” Computer scientist Stuart Russell has long argued that A.I. selection and targeting of humans should be banned, telling the Financial Times in 2021 that “such dehumanization of life and death decision-making by autonomous weapons systems must be outlawed worldwide.” And in a Foreign Affairs article published on October 13th, Henry Kissinger, former U.S. Secretary of State, and Graham Allison, former U.S. Assistant Secretary of Defense, warn of a potential A.I. arms race between the U.S. and China, which the authors describe as the only two “A.I. superpowers” at present, and argue that both bipartisan national efforts and international coordination are necessary.

Amidst all this, the start of November sees the U.K. hosting the first global A.I. safety summit. The agenda does include some implications of A.I. for bio- and cyber-security, but it does not attempt to fully address the implications of A.I. for warfare. Certainly, a two-day summit cannot cover every A.I.-related threat. It could be argued, however, that the summit’s focus on “misuse” by “bad actor[s]” (as outlined in the U.K. Government’s “Introduction to the AI Safety Summit” document) overlooks the implications of A.I. for warfare and for global distributions of power, including the use of A.I. in the military strategies of countries allied with the U.K., which are unlikely to be considered “bad actors.”

Whether at November’s summit or soon after, it is clear that support is growing for an international dialogue on autonomous weapons and the implications of A.I. for warfare, with the aim of preserving international humanitarian law.