Secretary General Proposes New UN Agency To Mitigate AI Risks

The UN Security Council held its first-ever meeting on artificial intelligence on July 18th, 2023. The Security Council was briefed by AI experts, with Secretary General António Guterres warning that “Both military and non-military applications of AI could have very serious consequences for global peace and security.” Guterres backed calls for the establishment of a UN agency to govern AI in the face of the “catastrophic and existential” threat it poses. He suggested that the new agency could be modelled on the International Atomic Energy Agency, which monitors the use of nuclear technology to limit the proliferation of nuclear weapons.

Guterres’ proposal was met with a muted reaction from representatives of the five permanent members of the Security Council. Zhang Jun, the Chinese Ambassador to the UN, said that any new laws around AI must “allow countries to establish AI governance systems that are in line with their own national conditions.” He added that China “firmly opposes” any actions which would obstruct technological development. Russia’s Deputy UN Ambassador Dmitry Polyanskiy expressed doubt that the Security Council was the right venue for debates about AI, suggesting that “What is necessary is a professional, scientific, expertise-based discussion…at specialised platforms.” Speaking to Al Jazeera after the meeting, UK Foreign Secretary James Cleverly stressed that it was not yet possible to know the most effective way to regulate AI.

It is crucial that the use of AI in warfare be regulated, but the Secretary General’s ambitious plan is flawed. The chilly attitude of veto-wielding members of the Security Council towards the proposed agency illustrates the risk of alienating powerful states by seeking to impose a single, overarching regulatory framework. A more prudent approach would be to focus on the most urgent challenges where consensus already exists. In conversation with the BBC after the Security Council meeting, Anthony Aguirre, Executive Director of the Future of Life Institute, advised that states focus on banning the involvement of AI in the command-and-control systems of nuclear weapons. Such a ban stands a greater chance of success than the establishment of a flagship agency, yet would still be a significant achievement for global security. Recommendations for the international regulation of AI published in July by the Simon Institute, a think tank which lobbies the UN, include “avoiding hurried moves and rushed proposals for new institutions” for the same reason.

Attempts to regulate how AI is deployed in conflict are still in their infancy. The EU AI Act, expected to come into force at the end of 2023, explicitly excludes the military use of AI from its purview, despite being billed by the EU as “the world’s first comprehensive AI law.” Non-binding guidance on the ethics of AI published by UNESCO in 2021 covers 11 policy areas, but is likewise silent on armed conflict. Fortunately, efforts are underway to close this regulatory gap. The UN Secretary General published a policy brief on A New Agenda for Peace in July, which calls for international regulation of autonomous weapons systems by 2026. The first global summit on AI safety will also take place in the UK later this year, although a programme has not yet been announced.

Progress on the governance of AI in conflict cannot come soon enough. If experts are correct, the stakes could not possibly be higher. “The United Nations must play a central role to set up a framework on AI for development and governance to ensure global peace and security,” Professor Zeng Yi, Co-Director of the China-UK Research Centre for AI Ethics and Governance, told the Security Council. “AI risks human extinction simply because we haven’t found a way to protect ourselves from AI’s utilisation of human weaknesses.”

Matthew Price
