Ukraine Has Started Using Clearview AI’s Facial Recognition During War

Ukraine will employ artificial intelligence (AI) technology from the American company Clearview to identify refugees and their families, uncover potential Russian assailants, and combat misinformation, according to an exclusive Reuters report on Sunday, March 13th. Other Western technology companies have also offered cybersecurity tools to support Ukraine in its fight against Russia, but the scale of Clearview’s facial recognition technology poses ethical and security challenges for Ukraine and the rest of the world both during and after the conflict. Misuse or abuse of AI technology could lead to arrests of innocent people through misidentification, civilians’ loss of control over their data, and potential co-opting of that data by the Russian government. 

Albert Fox Cahn, executive director of the Surveillance Technology Oversight Project in New York, expressed concern about the use of this AI technology in war zones: “we’re going to see well-intentioned technology backfiring and harming the very people it’s supposed to help” (Reuters). Clearview Chief Executive Hoan Ton-That said the technology should not be used in violation of the Geneva Conventions, the set of treaties that define international humanitarian law (Reuters). 

AI’s importance in reuniting refugee families or identifying the dead cannot be overstated during a crisis that has already produced more than two million refugees (BBC). However, the potential short- and long-term dangers of AI technology may outweigh the benefits. As Albert Cahn noted, misidentification could result in the death or imprisonment of innocent civilians. Clearview’s database could also be stolen and wielded by the Russian government to persecute Russian dissidents and Ukrainians. In the long term, any government or private corporation with access to such a wide range of personal information, especially without consent, raises concerns about privacy and the regulation of data. Putin’s bans on non-state information sources to block news of the war (BBC) demonstrate the dangerous consequences of allowing any corporation or government to become the sole source of information and amass enormous control over a populace and its data.

Clearview AI is an American facial recognition company that markets primarily to law enforcement, but also to universities and individuals. Whereas most fingerprint and facial identification programs draw only on criminal or government records, Clearview has a database of more than 10 billion images scraped from media sites around the world, claiming to gather data in the same way as Google Search (Reuters). However, government security agencies from the United Kingdom to Australia have banned or restricted the technology on the grounds that it violates citizens’ privacy. Most recently, on March 9th, 2022, Italy’s data protection authority fined Clearview €20 million and ordered it to delete Italian citizens’ facial biometrics (IAPP). Large social media companies such as Facebook/Meta Platforms and Twitter have also demanded that Clearview stop taking their data. With this expansion into Ukraine, Clearview could be used almost as a weapon, allowing Ukrainians to identify Russian operatives. However, as Cahn notes, “once you introduce these systems and the associated databases to a war zone, you have no control over how it will be used and misused.” 
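At its core, a facial recognition search of this kind compares a numerical “embedding” computed from a query photo against embeddings stored for a large image database and returns the closest match above a similarity threshold. The sketch below is only a toy illustration of that matching step, assuming precomputed embeddings from some unspecified face-embedding model and a made-up two-person database; it is not Clearview’s proprietary system, whose scraping pipeline, models, and indexing are not public.

```python
# Toy illustration of nearest-neighbor face matching over precomputed
# embeddings. NOT Clearview's system; real services index billions of
# scraped images with proprietary models and approximate-nearest-neighbor
# search. All names and numbers here are hypothetical.
import numpy as np

# Hypothetical database: name -> 128-dim face embedding (random stand-ins
# for vectors that a real face-embedding model would produce).
rng = np.random.default_rng(0)
database = {
    "person_a": rng.normal(size=128),
    "person_b": rng.normal(size=128),
}

def identify(query, db, threshold=0.6):
    """Return the best-matching name if cosine similarity clears the threshold."""
    best_name, best_score = None, -1.0
    for name, emb in db.items():
        score = float(np.dot(query, emb) /
                      (np.linalg.norm(query) * np.linalg.norm(emb)))
        if score > best_score:
            best_name, best_score = name, score
    # Matches below the threshold are rejected; this cutoff is exactly
    # where the misidentification risk lives.
    return best_name if best_score >= threshold else None

# Example query: a noisy copy of person_a's embedding (e.g., a new photo).
query = database["person_a"] + rng.normal(scale=0.1, size=128)
print(identify(query, database))  # -> "person_a"
```

The threshold in such a system is where the danger Cahn describes becomes concrete: set it too low and innocent people are matched to the wrong identity; set it too high and genuine matches are missed.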

While Clearview did not offer its technology to Russia, the Russian government is also wielding significant artificial intelligence technology, according to Samuel Bendett, an advisor at the Center for Naval Analyses. Some of these programs had already been used during Russia’s involvement in the Syrian conflict, with drones relaying constant situational combat awareness. Russian interest in artificial intelligence also increases the incentive for a Russo-Chinese partnership to develop new technologies for use in both war and peacetime (Politico). China itself has already used facial recognition technology to specifically target and profile the Uighur ethnic minority (New York Times). 

In 2020, the United Nations Office for Disarmament Affairs warned that “certain uses of AI could undermine international peace… accelerating the pace of armed conflicts, or loosening human control over the means of war.” Faced with the Russian assault, Ukrainians need the support of Western governments and companies to counter both physical and cyber threats. At the same time, individuals and governments should remain aware of the potential dangers of using artificial intelligence in war zones.