The Growing Relationship Between Disinformation and Racist Ideologies

After the deadly stabbing of three young girls in Southport, England, at the end of July, far-right and anti-immigrant riots quickly spread across Britain, with many violent demonstrators attempting, without proof, to link the crime to Muslim immigrants. From the first day of the riots, far-right figures weaponized social media platforms, most notably X, to spread disinformation about the attack. The British anti-Islam campaigner Tommy Robinson, for example, posted a video with the caption: “There’s more evidence to suggest Islam is a mental health issue rather than a religion of peace.” The video was viewed more than 1.4 million times, and posts like it helped drive hundreds of rioters into the streets. Although the perpetrator was in fact a 17-year-old born in Wales to a family of Rwandan origin, attempts to debunk the falsehoods failed, and social media users continued to advance narratives demonizing Muslims and migrants; mosques were vandalized, businesses were set ablaze, and Muslims were attacked. The episode highlights the deepening relationship between disinformation and the promotion of racist ideologies, one that should push social media companies to strengthen their content moderation policies.

The salient role of false information in fueling the riots was further underscored by the recent arrest in Pakistan of a man named Farhan Asif on charges of cyberterrorism for spreading disinformation about the stabbing. According to Imran Kishwar, the deputy inspector general of investigation in Lahore, Asif operated Channel3 Now, an X account that misleadingly presented itself as a legitimate news channel. The account was among the first to falsely report that the perpetrator was a man named Ali Al-Shakati, a claim unsubstantiated by any official source. According to a report in the Washington Post, Asif claimed he spread the false information to gain viewers and income. This kind of disinformation is part of a broader phenomenon known as engagement farming, a tactic in which users create and share content designed to maximize likes, comments, and reposts, with the aim of boosting their account’s visibility, attracting new followers, and reaping larger financial rewards.
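To make the dynamic concrete, here is a minimal, purely illustrative sketch of how a platform might score posts for possible engagement farming. Every field name, weight, and threshold below is a hypothetical assumption made for illustration, not a description of any real platform’s detection system.

```python
# Purely illustrative sketch: a toy heuristic for flagging possible
# engagement farming. All field names, weights, and thresholds are
# hypothetical assumptions, not any real platform's signals.
from dataclasses import dataclass

@dataclass
class Post:
    reposts: int
    replies: int
    likes: int
    follower_count: int
    claims_breaking_news: bool  # self-styled "news" framing with no sourcing

def engagement_farming_score(post: Post) -> float:
    """Return a rough 0-1 score; higher suggests possible engagement bait."""
    # Virality wildly out of proportion to the account's audience is one
    # commonly cited signal of engagement farming.
    reach = post.reposts + post.replies + post.likes
    score = min(reach / max(post.follower_count, 1) / 10.0, 1.0)
    # Unverified "breaking news" framing from a non-news account is another.
    if post.claims_breaking_news:
        score = min(score + 0.3, 1.0)
    return score

# Example: a small account whose post vastly outperforms its audience,
# much like the false "Ali Al-Shakati" report described above.
suspect = Post(reposts=5_000, replies=1_200, likes=20_000,
               follower_count=800, claims_breaking_news=True)
print(f"{engagement_farming_score(suspect):.2f}")  # prints 1.00
```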

More recently, though, the nature of disinformation has taken a dangerous turn. The shift became particularly evident after Elon Musk’s takeover of Twitter in late 2022, after which content moderation on the platform was substantially weakened. Since acquiring the company, Musk has laid off thousands of employees, many from the trust and safety team, the group responsible for countering false information on the platform. He has also allowed far-right figures such as Tommy Robinson and Andrew Tate back onto X after they were suspended for violating rules on hateful conduct. These changes have dangerous implications, as they risk exacerbating an already polarized environment.

To begin countering the harm disinformation inflicts on marginalized communities, content moderation policies must be reevaluated so that digital spaces remain safe and inclusive. That means companies must put people over profit and remove posts containing false information. To incentivize platforms to adopt such policies, reputational rewards, such as grants, could be offered to those that demonstrate a commitment to tackling false information. Removing content after it is posted is not enough, however, given the speed at which information spreads online. Social media companies must therefore take a more proactive approach, flagging accounts likely to disseminate false information based on their past behavior. Moreover, content moderation cannot continue to rely solely on AI: automated systems struggle with sarcasm, slang, and cultural references, so human oversight remains a necessary component for catching what AI misses. A rough sketch of how such a hybrid pipeline might triage flagged posts appears below. Together, these strategies would not only help tackle disinformation but also address how it targets people based on their identity.
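As a concrete illustration of the two proposals above, the following sketch combines a simple account-history risk score with a triage step that routes the AI classifier’s low-confidence calls to human reviewers. All names, formulas, and thresholds are hypothetical assumptions for this example, not a definitive implementation.

```python
# Illustrative sketch only, not any platform's real moderation pipeline.
# It combines (1) scoring accounts by their history of sharing debunked
# content and (2) routing posts the AI classifier is unsure about, such
# as sarcasm or slang, to human reviewers. All numbers are assumptions.
from dataclasses import dataclass

@dataclass
class Account:
    posts_total: int
    posts_debunked: int  # past posts flagged as false by fact-checkers
    strikes: int         # prior moderation actions against the account

def account_risk(account: Account) -> float:
    """Rough 0-1 prior that the account disseminates false information."""
    if account.posts_total == 0:
        return 0.0
    history = account.posts_debunked / account.posts_total
    return min(history + 0.1 * account.strikes, 1.0)

def triage(ai_confidence: float, account: Account,
           threshold: float = 0.75) -> str:
    """Decide what happens to a post the AI has flagged as disinformation.

    ai_confidence is the classifier's certainty that the post is false;
    uncertain cases go to human review rather than automated removal,
    since models miss sarcasm, slang, and cultural references.
    """
    risk = account_risk(account)
    if ai_confidence >= threshold and risk >= 0.5:
        return "remove"        # strong signal from both model and history
    if ai_confidence >= threshold or risk >= 0.5:
        return "human_review"  # one strong signal: let a person decide
    return "monitor"           # weak signals: watch, do not act yet

# Example: a confident model call against a repeat offender.
repeat_offender = Account(posts_total=200, posts_debunked=90, strikes=2)
print(triage(ai_confidence=0.9, account=repeat_offender))  # "remove"
```

The design choice worth noting is that neither signal alone triggers automated removal: a confident model call or a risky history on its own only escalates the post to a human, which is precisely the oversight role the paragraph above argues AI cannot fill by itself.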

Although no technique or protocol can fully shield users from false information, urging social media companies to join the fight against inflammatory content is imperative to lessen the risks of disinformation and to protect individuals from both digital and physical hate.
