Israel’s Habsora AI System Makes War Less Human

On November 30th, the Israeli publication +972 released an article investigating an IDF artificial intelligence system in use since 2011. According to multiple interviews with IDF representatives and former members of the armed forces, “Habsora” (The Gospel) interprets raw intelligence data to match Hamas fighters with buildings or specific physical locations, generating targets for human intelligence officers to recommend strikes against. +972’s reporting reveals the IDF’s reliance on artificial intelligence to sustain its pace of military strikes in the Gaza enclave, pointing to a potential contributing factor in the unprecedented levels of devastation.

According to an interview with former IDF Chief of Staff Aviv Kochavi by the Israeli newspaper Ynet, “From the moment this [AI system] was activated it generated 100 new targets every day. In the past, there were times in Gaza when [the Targets Administrative Division] would create 50 targets per year. And here the machine produced 100 targets in one day.” Further reporting by +972 reveals the mass-production mindset of IDF targeting staff, a consequence of the sheer number of targets generated by Habsora. According to an anonymous source within the Targets Administrative Division, “It really is like a factory. We work quickly and there is no time to delve deep into the target. The view is that we are judged according to how many targets we manage to generate.” An additional anonymous source told +972 that official IDF policy has changed to reflect the new, target-rich environment:

“When the general directive becomes ‘Collateral Damage 5,’ that means we are authorized to strike all targets that will kill five or less civilians — we can act on all target files that are five or less… In my time, if the house I was working on was marked Collateral Damage 5, it would not always be approved [for attack]…To my understanding, today they can mark all the houses of [any Hamas military operative]… That is a lot of houses. Hamas members who don’t really matter for anything live in homes across Gaza. So they mark the home and bomb the house and kill everyone there.”

The deployment of AI on the battlefield has been foreseen for almost a decade. However, despite collective acknowledgment of a new paradigm in conflict, the international community has failed to create meaningful regulations for AI and its military applications. This unregulated space is a breeding ground for untested methodologies that will result in civilian deaths and unnecessary destruction. Modern AI has proven effective but methodologically problematic, with the logical shortcomings of AI systems exacerbated by deployment in combat zones. The introduction of AI into conflict zones also dilutes responsibility for atrocities and mistakes. Nation-states know AI occupies a legal gray area and will exploit its novelty to operate unethical systems such as Habsora. By digitally delegating the immense and consequential task of determining whether a building is a hostile stronghold or a civilian shelter, the IDF has condemned hundreds of innocent people to die.

Israel may be one of the first nations to deploy an AI system for intelligence-gathering, but Habsora will be studied by major powers, and similar systems will likely appear in future U.S. and NATO counter-insurgency operations. It is also likely that Habsora will be adapted for monitoring Palestinians after combat in Gaza ceases. Israel has long been a leader in applying emergent technologies to military and policing roles. According to Amnesty International, Israel has developed Wolf Pack and Blue Wolf, automated facial recognition systems deployed in the West Bank. Wolf Pack is an automated data collection system that scrapes facial scans from thousands of CCTV cameras, while Blue Wolf is a smartphone application that lets IDF soldiers check what occupation forces refer to as the “Facebook for Palestinians.”

The brutal air campaign launched against Gaza may now have a grim logic behind it. Habsora has created an intelligence ecosystem that encourages maximum violence, generating more targets than can realistically be struck in a single day. The sheer number of targets the system provides makes verifying its output a fool’s errand. Artificial intelligence must be thoroughly regulated at the international level, and its use by military forces must be limited.
