Responsibility In Times Of Crisis: Trump, Twitter And The Harm Principle

The murder of George Floyd by Derek Chauvin on May 25th has sparked outrage across the world and led to an urgent revival of the Black Lives Matter movement. This movement is being played out in the streets of America and beyond, as protesters demand equality and radical change after 400 years of compounded systemic racism. The other arena where this civil rights movement is living and breathing is social media, with Twitter, Facebook, and Instagram full of information, solidarity, and outrage.

On May 28th, President Trump tweeted a threat to protesters that included the words “when the looting starts, the shooting starts.” The phrase was borrowed from a 1960s Miami police chief who, at the height of the civil rights era, told the press that “we don’t mind being accused of police brutality.” Twitter responded swiftly, hiding the tweet behind a content warning for “glorifying violence” that also prevented users from retweeting it. An identical post on Facebook was left undisturbed. As the two social media giants take opposing stands, a debate has begun about the social responsibility of social media platforms and the enforcement of their community standards, which are designed to limit free speech when it poses a threat to a group or to individuals.

Twitter explained its decision to hide the tweet behind a violence warning as one which struck “the right balance between enabling free expression, fostering accountability, and reducing the potential harm caused by these tweets.” In response, Trump signed a dubious, knee-jerk executive order aimed at limiting fact-checking and censorship on social media platforms more widely. Jessica Rosenworcel, a Democratic commissioner of the Federal Communications Commission (the body that would have to approve the order), argues that it is more in tension with the First Amendment than in harmony with it. The First Amendment proclaims that Congress shall make no laws “abridging the freedom of speech,” but this applies to governmental restrictions, not those of private companies; Twitter’s decision to restrict the tweet therefore infringes no legal right.

With a following of almost 82 million, Trump is Twitter’s eighth most-followed account as well as a heavy user of the site. Twitter is Trump’s favourite means of communicating with the public: the platform was pivotal in his election campaign, and is often the first place he turns when sharing an opinion or thought. For a long while, Trump’s relationship with Twitter was harmonious, evidenced in his 2016 tweet that “If the press would cover me accurately & honourably, I would have far less reason to ‘Tweet.’”

Twitter’s algorithms and trending topics give attention to the loud, the controversial, the outspoken. The relationship between Twitter and Trump has been to some extent co-dependent, with blame on both sides for the scale of the audience that tweets by Trump – which often contain unsavoury comments or threatening language – can reach. Yet as citizens and consumers of news, we too have a part to play: our addiction to Trumpism, our tendency to engage with controversial statements, and our consumption of news on social media have all helped to elevate his voice. Writing in the New York Times, the journalist Charlie Warzel describes this relationship as “a cycle that requires participation from all parties: the president (who initiates it), Twitter (which tolerates it), and the media (which amplifies, frequently to the president’s advantage).” Combating such a cycle is not achieved by banning one party, but by reimagining the way we engage with news and social media.

Whilst Twitter acted to limit the reach of Trump’s comment, Facebook CEO Mark Zuckerberg allowed the post to remain in its original form, stating in a Facebook post that although he found Trump’s remarks “deeply offensive,” he did not want his company to be the “arbiter of truth.” Yet the “arbiter of truth” framing, whether applied to Facebook or Twitter, sidesteps the issue. Writing in The Guardian, the journalist Siva Vaidhyanathan argues that the question has never been about claiming the truth but about harm: “the standard for such content moderation should be the potential for harm (…) truth is not the real issue or the problem.” A pleasant experience for users is a central aim of both Twitter and Facebook, and one they are entitled to moderate content in order to achieve.

Zuckerberg has faced a backlash for his response both internally and externally. One former employee, Timothy Aveni, quit and posted on his Facebook page that “Mark [Zuckerberg] always told us that he would draw the line at speech that calls for violence. He showed us on Friday that this was a lie. Facebook will keep moving the goalposts every time Trump escalates [and is] complicit in the propagation of weaponized hatred”. Additionally, on Saturday a letter signed by 214 scientists (95 of whom receive funding from Zuckerberg’s research companies) stated that they were “deeply concerned at the stance Facebook has taken” and urged the Facebook CEO to “consider stricter policies on misinformation and incendiary language that harms people or groups of people, especially in our current climate that is grappling with racial injustice.”

Yes, free speech is a constitutional right. Yes, it is a right protected more heavily in the U.S. than in other western democracies. Yet whenever free speech is discussed, it is repeatedly established that it is not an absolute, concrete right, but one that must be weighed against the duties we have to others and the rights others hold over us. Even John Stuart Mill, one of free speech’s greatest defenders, who believed in its inherent value as a means of pushing arguments towards truth and logic, made an exception for what is commonly known as the harm principle, stating that “the only purpose for which power can be rightfully exercised over any member of a civilized community, against his will, is to prevent harm to others.” Sanctions on speech, Mill argued, are legitimate when it can be shown that an attack has been made against an individual or group of persons.

One example of the harm principle in action is hate speech. Hate speech is difficult to define, as there is no universally agreed wording, and independent corporations such as Twitter are free to set their own definitions. One proposed by the critical race theorist Mari Matsuda sees hate speech as “words that are used as weapons to ambush, terrorize, wound, humiliate, and degrade.” By these definitions, Trump’s tweet is a clear example of hate speech and of the harm the principle guards against, and prohibiting such messages is therefore justifiable in both the public and private sphere.

Some argue that Twitter should go a step further and suspend Trump’s account. His May 28th tweet is, after all, the latest in a long line of threatening statements the President has made on the platform. For proponents of this argument, banning Trump would hold power to account for terms-of-service violations and remove the distraction and division his account creates.

Accountability for a world leader would show that an abundance of power does not make you exempt from the rules. Nor would it be a violation of free speech, as Twitter is a private company. Removing Trump’s account would not remove his free speech – it still exists – he would simply have to exercise it on different platforms. This accountability argument does hold some sway, particularly considering Twitter’s evasive “newsworthiness” clause, which exempts world leaders from sanctions for violations that would see other accounts suspended or taken down. It does not seem controversial to suggest that the larger an individual’s following, the more risk a violent post creates, and therefore that in implementing the harm principle on social media, exceptions should not be made for those with a high following.

Yet an outright ban could draw Twitter into a legal and philosophical free-speech debate it does not wish to be a part of. Adding a warning screen, which still allowed access to the tweet but limited engagement, seems a measured and appropriate response that balances rights and duties. Moving forward, platforms must take seriously their responsibility to restrict the amplification of voices that call for violence and hatred, particularly in the context of the Black Lives Matter movement, which is itself fighting against structural racism and violence. In other words, social media platforms should adhere strictly to the harm principle.

Free speech is, rightly, an element of U.S. democracy of which so many are proud. But democracies are also built on ideals of equality, of opportunity, and of dignity. To jeopardize these rights for the sake of an unabridged right to free speech is to jeopardize democracy itself.

Katy de la Motte
