Disinformation and Social Media: A Threat to Security and Peace

This year, disinformation from Russian sources was discovered in the lead-up to the European Union’s (EU) parliamentary elections. Disinformation is the deliberate, malicious spread of false claims, as opposed to misinformation, which refers to the unintentional spread of misleading and false claims. Disinformation in and of itself is an old phenomenon, but with the rise of social media it has become an increasingly sophisticated and frequently used tool for achieving political aims. While the EU has been working with Facebook, Google, Twitter and YouTube to tackle online disinformation, it still poses a critical threat to peace and stability, particularly in countries with fragile security. Since alleged Russian interference in the 2016 United States (U.S.) presidential election, disinformation has become more prevalent and pervasive, with little action being taken on the global stage.

Disinformation was present in almost every major election and national poll conducted this year. In April, The New York Times released a report on the use of disinformation by major Indian political parties to discredit political opponents. In May, Indonesian officials and politicians struggled to deal with a deluge of disinformation, so much so that weekly meetings were held to distinguish fact from fiction. According to the Australian Strategic Policy Institute (ASPI), much of the disinformation originated from presidential campaigns seeking to win votes. A report by the Atlantic Council’s Digital Forensic Research Lab (DFRLab) found disinformation in every African national poll held this year, including those conducted in Senegal, Togo, Niger, Nigeria and Mali. The disinformation spread in Africa was created abroad and co-ordinated by the Israeli-owned consultancy firm Archimedes Group.

Facebook hosts much of the disinformation being spread. It is an effective platform for countries and groups seeking to spread disinformation: the social media giant has billions of users across the globe, many of whom obtain much of their information and news from the site. The DFRLab has reported that common tactics used by groups disseminating disinformation include the creation of media pages that support or attack specific politicians, as well as pages that masquerade as legitimate news organizations. Most individuals are unaware that the information presented is heavily biased, unreliable, and designed to influence political outcomes. Other tactics include the creation of pages that appear to disseminate media leaks, and pages that appear to function as fact-checking operations. In both cases, the pages disseminated disinformation. A large proportion of these pages claimed to be run by groups or individuals within the target state; in reality, they were frequently run from other countries, often Israel, the United Kingdom, Portugal and Russia.

In December, the EU launched the ‘Action Plan Against Disinformation’ in an attempt to work with Facebook and other organizations to interrupt the spread of disinformation. Prior to the plan, Facebook was aware of the problem but had done little to prevent future outbreaks. According to The New York Times, the hesitation was a result of internal company politics. Reluctance to get involved in the disinformation present in the 2016 U.S. presidential election was a key factor: suggesting that disinformation and Russian interference might have influenced the election was considered too risky by Facebook executives, as it could be perceived as partisan support. Unfortunately, the lack of action had global ramifications. Disinformation has become a tool not only to advance political agendas, but to aggravate ethnic and societal cleavages. The EU’s plan effectively brought Facebook and other key players (Google, Mozilla, Twitter and YouTube) to the table.

The EU’s plan focuses on four key pillars: greater co-ordination between the EU and member states; implementation of the Code of Practice on Disinformation; improved awareness and societal resistance to disinformation through better communication and media literacy programs; and protecting election integrity. The code is the primary mechanism engaging online platforms such as Facebook. Its aim is to blunt the power of online disinformation by increasing ad transparency and scrutiny, and by removing malicious or manipulative content. In Europe alone, Facebook has removed 600 groups and pages identified as spreading disinformation. Facebook has also begun to remove similar pages in the U.S. and Africa, amidst claims that the media giant is not pulling its weight in combating the problem in other countries.

While the EU presents a solid undertaking to weaken disinformation, other countries are lagging behind. According to Alina Polyakova and Daniel Fried of the Brookings Institution, the United States is far behind the EU. While the U.S. has begun to fund research into disinformation, this has not yet led to major breakthroughs. Further, the U.S. Congress has drafted policies to address online ad transparency and to co-ordinate responses to disinformation outbreaks, but the proposed legislation has not been passed into law. While disinformation presents a challenge to the U.S., it is even more detrimental in countries that lack the resources to combat it.

The solutions presented by the EU currently serve as the gold standard for tackling disinformation. The plan fits within democratic norms of freedom of expression and freedom of the press because it removes only pages specifically designed to disseminate disinformation. It seems the U.S. will follow suit, provided internal politics do not interfere.

However, the response by Facebook and other powerful online platforms has been uninspired. It has taken months for the platforms to work within the code set out by the EU, and Facebook was unable to prevent disinformation from infecting every national poll in Africa this year. In a report by the BBC, Facebook outlined a multifaceted approach to tackling and managing disinformation across the continent, including working with local fact-checkers in Cameroon, Nigeria, Kenya, South Africa and Senegal, and the creation of a content review centre in Nairobi. Facebook has also banned Archimedes Group from publishing content on the site.

While some have been critical of Facebook, Kwabena Akuamoah-Boateng, a Washington-based Ghanaian academic, suggests taking a more holistic approach towards the problem of disinformation, stating, “If we continue to have these discussions by singling out a company, we will achieve very little in the end. Everyone has a role here. Let’s take another look at government and security agencies’ roles.”

There is merit to this idea. It is surprising that disinformation is not given greater priority by African states and the African Union (AU), particularly considering its impact on democracy and its role in aggravating ethnic tensions. In May, a joint communiqué released on World Press Freedom Day called on all member states to abide by the values of freedom of expression and journalistic freedom, to promote media literacy, and to work towards legislation that both undermines disinformation and promotes media freedoms.

Yet in order to tackle disinformation effectively, several steps must be taken both within countries and between them. Firstly, the U.S., the AU, and African states must make targeting disinformation a priority. While disinformation and Russian interference are likely to remain politically contentious, at the very least some legislation and policies aimed at educating the public in media literacy should be put in place. Similarly, while the AU and its member states have started to look into disinformation, greater efforts must be made to prioritize the issue, particularly in drafting and implementing policies and legislation that remove disinformation before it has a chance to become widespread. This is especially important in Africa, where disinformation has the potential to cause catastrophic damage to emerging democracies, or to inflame ethnic grievances that may lead to violence.

In tackling disinformation, Facebook and other companies have started to remove misleading groups and pages and to ban those responsible. However, greater transparency and co-ordination between online platforms and nation states would allow for more effective measures and better policies targeting disinformation. Facebook and platforms like it can no longer hide behind a veil of ‘neutrality’. The platforms they provide are structures that can be used in myriad ways, but they favour those individuals and organizations that can afford to advertise. In doing so, such platforms allow targeted information to spread across the internet, and this information (or disinformation, as it were) has power. While it is important to preserve the freedom of expression that individual users enjoy, online platforms must fully acknowledge the power they hold and implement policies that ensure accountability for groups or individuals that wilfully spread misleading claims, whether through pages or through advertising.

No doubt the path ahead will be a difficult balancing act for social media corporations and nation states alike; however, it is important that measures are taken to ensure that disinformation does not erode the truths necessary for democracy, security and peace.