U.S. Leads Groundbreaking Global Collaboration On Military A.I. Guidelines

At a recent gathering of politicians, tech executives, and researchers in the U.K., discussion of the potential risks of artificial intelligence extended beyond the fear of algorithms turning against humanity. A notable outcome was progress on controlling the military application of A.I., representing what one expert called a significant shift in the discourse surrounding autonomous weapons.

On November 1, at the U.S. embassy in London, U.S. Vice President Kamala Harris announced a series of A.I. initiatives and unveiled a groundbreaking declaration signed by 31 nations. The declaration aims to establish voluntary guidelines, or guardrails, around the military use of A.I. Signatories commit to using legal reviews and training to ensure compliance with international law, developing A.I. technology cautiously and transparently, avoiding unintended bias, and fostering ongoing discussion of responsible technology development and deployment.

The declaration, initially drafted by the U.S. after a conference in The Hague in February, is not legally binding, but it marks the first major agreement among nations to impose voluntary guardrails on military A.I. It also outlines plans for the signatory nations to reconvene in early 2024 to continue discussions. Separately, the U.N. General Assembly's First Committee approved a new resolution calling for an in-depth study of lethal autonomous weapons, potentially paving the way for restrictions on such weaponry, and the U.S. has sought agreements from other nations to affirm human control over nuclear weapons.

Lauren Kahn, a senior research analyst at the Center for Security and Emerging Technology at Georgetown University in the U.S., hails the declaration as “incredibly significant.” She sees it as a practical path toward a binding international agreement on norms governing the development, testing, and deployment of A.I. in military systems, rather than a circular conversation fixated on whether A.I. weapons should be allowed to make autonomous decisions about the use of lethal force. While some nations have resisted calls for an outright ban, the current declaration focuses on ensuring transparency and reliability in A.I. applications. Kahn emphasizes the importance of this focus, especially as militaries explore diverse ways to harness A.I., some of which could prove destabilizing or dangerous. Keeping to these priorities, she argues, ensures that the agreement embodies common-sense principles all nations can accept.

Indeed, Vice President Harris announced that the declaration has garnered support from U.S.-aligned nations, including the U.K., Canada, Australia, Germany, and France. However, China and Russia, viewed as leaders in autonomous weapons systems, did not sign. Russia also dissented from the new U.N. resolution, asserting that it would undermine existing work on autonomy under the Convention on Certain Conventional Weapons and pointing to the possibility of a malfunctioning A.I. system triggering an escalation in hostilities.

While efforts to regulate military A.I. gain momentum globally, concerns remain that they will have limited impact without the inclusion of major players like China (which did, however, sign a separate declaration on A.I. risks at the U.K.-hosted A.I. Safety Summit). The issue is expected to be a focal point in discussions between U.S. President Joe Biden and Chinese leader Xi Jinping at the Asia-Pacific Economic Cooperation summit.

Despite the focus on lethal autonomous weapons, debates often stall over theoretical systems that do not yet exist. While some advocate a complete ban on lethal autonomous weapons, the resolution approved by the U.N. General Assembly's First Committee instead calls for a comprehensive report on the challenges these weapons pose, seeking input from international organizations, the Red Cross, civil society, the scientific community, and industry. The Future of Life Institute, a nonprofit advocating an outright ban on lethal autonomous systems that target humans, sees these developments as a significant step toward a legally binding instrument, which the U.N. Secretary-General has envisioned for 2026.

Autonomous weapons already exist in some defensive systems, but reports of lethal systems incorporating modern A.I. being used in warfare are limited. One instance is a drone deployed during the Libyan civil war in 2020, which, according to a 2021 U.N. report, may have used lethal force without human control. Reports also suggest that lethal autonomous drones are being developed for Ukrainian forces in response to Russia's invasion. The rapid deployment of new technologies on the Ukrainian battlefield has driven global interest in military A.I. applications; the Pentagon, for example, is experimenting with incorporating A.I. into smaller, more cost-effective systems to enhance threat detection and rapid response.

Washington’s political declaration outlines measures for the responsible development, deployment, and use of military A.I. applications, emphasizing accountability and a “responsible human chain of command and control.” Going forward, A.I. regulation is likely to be a central topic in discussions between the U.S. and China, including potential pledges to ban A.I. from autonomous weaponry and from the control and deployment of nuclear warheads.

M. Shanawar Khan