Meta Faces Lawsuit For Fueling Ethiopian Civil War

Two Ethiopian researchers at the Katiba Institute in Kenya are filing a lawsuit against Meta for failing to remove incendiary content fueling the Ethiopian civil war, according to Reuters. Following the killing of one researcher’s father, the pair accuses Meta’s content-recommendation algorithm of promoting hateful posts that incited the murder as well as ongoing violence. Specifically, the lawsuit accuses Meta of failing to moderate content on its platform and seeks reparations for the role the company has played in inciting violence around the world. Meta has repeatedly denied the allegations and pointed to its efforts to improve content moderation. Yet one of the researchers, who is currently being targeted on the platform because of their ethnicity and has been actively reporting the abuse to Meta, has not had those concerns addressed.

To learn more about the civil war in Ethiopia, including its historical context, please read our report here. In brief, amid rising tensions between the Tigray People’s Liberation Front (TPLF) and Prime Minister Abiy Ahmed, the Tigray region held elections in 2020 even though Prime Minister Ahmed had cancelled them in the rest of the country. In response, Prime Minister Ahmed withheld federal funding from Tigray and ordered a law enforcement operation. Since then, violence has escalated, with harrowing accounts of war crimes emerging from local residents.

This is also not the first time Meta has been accused of complicity in escalating tensions in other countries, including fueling civil war and genocide. Within the last few years, the company has come under United Nations scrutiny for amplifying anti-Rohingya rhetoric in Myanmar. An article from Amnesty International explains how other lawsuits are underway because Meta’s algorithms create “echo chambers” of hate. A similar criticism of the AI-powered content algorithm was portrayed in the film “The Social Dilemma,” which highlighted the algorithm’s ability to radicalize and polarize users in the United States.

A Rohingya refugee told Amnesty International: “I saw a lot of horrible things on Facebook. And I thought that the people who posted that were bad… Then I realized that it is not only these people – the posters – but Facebook is also responsible.” In the same article, Amnesty International’s Secretary General, Agnes Callamard, is quoted as saying that “Facebook’s algorithms were intensifying a storm of hatred against the Rohingya which contributed to real-world violence.” Facebook’s response to the Rohingya lawsuits, according to GBH News, is that it has “built a dedicated team of Burmese speakers, banned the [Burmese armed forces], disrupted networks manipulating public debate,” in line with recommendations from “civil society organizations and independent reports.”

Change is slow, and the people killed in Ethiopia, including the researcher’s father, are victims of society’s inability to live safely alongside Big Tech. Rosa Curling, director of the UK-based nonprofit Foxglove, was quoted by Wired as saying, “Facebook can no longer be allowed to prioritize profit at the expense of our communities. Like the radio in Rwanda, Facebook has fanned the flames of the war in Ethiopia.” In the same article, Wired reports that more than 6 million people use Facebook in Ethiopia, yet the company lacks the capacity to “properly moderate content in most of the country’s more than 45 languages.” This suggests Meta does not have the resources to operate responsibly at the global scale on which it functions.

Foxglove is doing important work fighting Big Tech and pursuing “Algorithmic Justice,” using legal action to increase transparency and accountability in both the commercial and public sectors. Its recent wins include stopping an AI grading system from being adopted in the UK and ending a discriminatory AI visa-screening system. But it is now past time for politicians and companies to be proactive in helping society coexist with Big Tech. Science and technology are rapidly evolving and affecting all parts of life across the world. Although technology is helpful, we cannot turn a blind eye to its dangers and must be ready to adapt as we learn about its varied impacts. Meta and governments around the world are aware of the dangerous effects of recommendation algorithms, including their tendency to become echo chambers that perpetuate hate – now is the time to take action to mitigate those effects.