The Dangers Of Unchecked Technology: Oppression From TikTok To Xinjiang


In September 2020, the Australian Strategic Policy Institute (ASPI) published an expansive report entitled “TikTok and WeChat: Curating and controlling global information flows.” The report discusses the soft power of censorship that these apps exert over their users, and broadly demonstrates how the moderation policies of media companies can direct or distort public discourse globally. With respect to TikTok, the report identifies hashtags that were being shadow-banned, all of them attached to issues of significant political consequence. The second major issue the report raises is the use of these apps in Chinese propaganda efforts, particularly to suppress discussion of the persecution of the Uyghurs in Xinjiang, a region in northwestern China. This leads into a wider investigation of the surveillance technology that facilitates human rights violations in China, and of the potential for that technology to be exported. Together, these examples highlight how integral technology has become to political discourse and the practice of policy. Unfortunately, the lesson they teach is how readily technology can be used to disrupt political discourse and oppress innocent people.

Considering the context of the ASPI report, 2020 was an extraordinarily difficult year, marked by the coronavirus pandemic and a series of political upheavals, and the study’s findings on censorship track several of those events. During the George Floyd protests in the United States, #ACAB (All Cops Are Bastards) was shadow-banned. Hashtags relating to the ongoing pro-democracy protests in Thailand, like ‘Why do we need a king?’, were removed outright rather than shadow-banned. Beyond these specific events, hashtags criticizing political leaders were shadow-banned, ranging from phrases like ‘Putin is a thief’ to seemingly innocuous ones like ‘Jokowi’, the common nickname of Indonesian President Joko Widodo. These phrases were shadow-banned under TikTok’s restrictive moderation policies, which the company claims operate in concert with local laws.

The report finds that TikTok’s moderation practices go further than local compliance. Politically sensitive topics are suppressed across the platform, regardless of where a post originates. For example, hashtags exploring LGBTQ+ ideas were banned in both Arabic and Cyrillic scripts. Similarly, all Thai-language anti-monarchist hashtags were banned, not merely posts made within Thailand. This restrictive system of content moderation, keyed to language rather than location, stifles public discourse and restricts freedom of expression.
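To make that distinction concrete, the sketch below contrasts a language-keyed rule with a location-keyed one. This is a hypothetical illustration, not TikTok’s actual code; the language codes, country codes, and function names are invented for the example.

```python
# Hypothetical sketch: why moderating by language over-blocks compared with
# moderating by poster location. A language-keyed rule suppresses a hashtag
# for every speaker of that language, wherever in the world they post from.

BLOCKED_LANGUAGES = {"th"}   # assumption: Thai-language political hashtags
BLOCKED_COUNTRIES = {"TH"}   # assumption: only posts made inside Thailand

def visible_under_language_rule(post: dict) -> bool:
    """Language-keyed rule: hides the post for Thai speakers everywhere."""
    return post["lang"] not in BLOCKED_LANGUAGES

def visible_under_location_rule(post: dict) -> bool:
    """Location-keyed rule: hides the post only inside the jurisdiction."""
    return post["country"] not in BLOCKED_COUNTRIES

# A Thai-language post made from Germany:
post = {"lang": "th", "country": "DE", "hashtag": "#WhyDoWeNeedAKing"}
print(visible_under_language_rule(post))   # False: suppressed worldwide
print(visible_under_location_rule(post))   # True: visible outside Thailand
```

The point of the contrast is that a language-keyed rule exports one jurisdiction’s censorship to the entire global diaspora of that language’s speakers.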

This blunt form of censorship is egregious because oppressive states can use TikTok, and platforms like it, to suffocate international discourse on issues the state deems sensitive. TikTok as a case study makes the link between business and state quite clear. To keep the state functioning according to party values, the Chinese Communist Party (CCP) has enacted a number of laws that force businesses to cooperate with the government. As a result of this link, TikTok plays a role in suppressing criticism of the treatment of Uyghurs in Xinjiang. The report outlines the two means by which this suppression occurs: the dissemination of disinformation depicting an idyllic Xinjiang, and the outright banning of content that attempts to reveal the extent of state oppression in the region.

The creation of disinformation, false or misleading information spread with the intent to deceive, is an increasingly pressing threat to the healthy functioning of democracy and, more generally, to a free and open internet. The issue is not unique to TikTok and Xinjiang; it is endemic to the malicious use of social media platforms. Platforms like Facebook, Twitter, and YouTube face the inverse problem: without sufficiently strict moderation, bad actors are able to spread convincing disinformation. Russian bots using Twitter and other platforms are known to be particularly effective at creating and circulating it. This malicious use of technology and manipulation of the public conversation has significant consequences for the people who believe what they read.

Sophie Marineau discusses the propagation of dangerous conspiracy theories relating to the COVID-19 pandemic and 5G networks, and the pernicious methods employed to spread fake news. Here the connection between misleading, dishonest tweets and real-world repercussions is staggering. How many deaths from COVID-19 could have been avoided if there had been no room for these lies to circulate? How much damage was done to internet and cellular infrastructure because of untrue claims about 5G networks? Without a clear international effort to create universal guidelines for digital governance, the ability of foreign actors to spread dangerous lies will persist. Social media and instant communication have become essential parts of modern life, and their unchecked abuse poses an existential threat to effective communication between and within states.

Returning to Xinjiang, restricting the international public’s access to information about the region is a national interest of the CCP. The party is keen to minimize the negative attention it receives from the global press, and censorship of the issue on social media is one tool for keeping it out of the headlines. A key aspect of this story that is often missed is that this state-sponsored suppression is a misguided scheme to restrict terrorism in China. The social calculus informing the ‘Strike Hard Campaign Against Violent Terrorism’ correlates religiosity with terrorism: the policy rests on the belief that if the government erodes the influence of Islam in Xinjiang, it will minimize the likelihood of violent terror attacks in China. Xinjiang has thus become a testing ground for the invasive surveillance technology being developed by the CCP. In 2019, Human Rights Watch (HRW) published a report in which it reverse-engineered the Integrated Joint Operations Platform (IJOP), the mobile application used by the state to police and suppress the Uyghurs.

HRW determined that the IJOP app is designed to perform three functions: collect personal information, report suspicious activity, and use algorithms to flag people for investigation. The scope of these functions is frightening. The personal information collected by IJOP ranges from a person’s height to the colour of their car, in addition to a suite of other data like location, ID numbers, and household utility usage. The ‘suspicious behaviours’ it watches for are wide-ranging and seemingly innocuous. The report alleges that they include a variety of non-violent behaviours such as not interacting with one’s neighbours, using virtual private networks, or using encrypted communication apps like WhatsApp. Furthermore, the app automatically collects audio-visual information, allowing the government to monitor personal relationships and religious observance in an attempt to identify potential dissidents.
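To see how little ‘suspicion’ a system of this kind requires, consider the minimal sketch below. The behaviour categories are drawn from HRW’s public reporting, but the rule names, weights, and threshold are invented for illustration; this is in no way the reverse-engineered IJOP code.

```python
# Hypothetical sketch of rule-based flagging, illustrating how a system like
# the one HRW describes can mark ordinary conduct as "suspicious". All names,
# weights, and the threshold are invented; none of this is IJOP's actual code.

SUSPICION_RULES = {
    "uses_vpn": 1.0,                  # non-violent behaviours per HRW's report
    "uses_encrypted_messaging": 1.0,  # e.g. WhatsApp
    "avoids_neighbours": 0.5,
    "unusual_utility_usage": 0.5,
}
FLAG_THRESHOLD = 1.0  # invented: a single behaviour can trigger investigation

def flag_for_investigation(profile: dict) -> bool:
    """Sum the weights of every matching behaviour; flag above the threshold.

    Note what is absent: any evidence of wrongdoing. Ordinary private
    conduct alone is enough to prompt an investigation.
    """
    score = sum(w for rule, w in SUSPICION_RULES.items() if profile.get(rule))
    return score >= FLAG_THRESHOLD

# Merely installing an encrypted messaging app is sufficient to be flagged:
print(flag_for_investigation({"uses_encrypted_messaging": True}))  # True
```

A rule set like this cannot distinguish a would-be terrorist from anyone who simply values privacy, which is precisely the critique that follows.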

In addition to the enormous amounts of data taken without consent, the report finds that the IJOP app also rates the administrators who use it. This internal scoring system encourages administrators to take proactive action to keep their performance scores up, compelling agents of the state to act frequently. The digital prison the CCP has built to monitor Xinjiang is the perfect partner to the physical and cultural controls embedded in the administration of the region. These controls range from complex border security protocols that greatly limit transit in and out of the region to intrusive home visits in which officials interview residents about personal matters. On top of these restrictions on daily life, further measures have been implemented to erode the unique culture of the Uyghurs, including the construction of re-education camps, the destruction of mosques and other significant religious sites, and the intrusion of state-approved cultural icons into traditionally Uyghur cultural spaces like mosques.

Taking stock of the situation, it is clear that the system takes in huge volumes of data and that administrators are compelled to act on it. It is difficult to imagine that such vast quantities of information can be used usefully or efficiently, especially when analysts are pressured to meet quotas. Further, how can we believe that this system, as represented by HRW, is used with the precision the government claims? Even if the system did work to prevent terrorism, those positives are thoroughly undermined by the severe human rights violations the Uyghurs experience, both those born of the IJOP and those imposed by the extra measures put in place to control the population.

Certainly, the development and implementation of this technology is a horrifying invasion of privacy and is contributing to the destruction of a culture. Even more concerning is the potential exportability of IJOP to other countries. This possibility makes the need for clear, universal governance of digital resources ever more evident. Establishing international norms that outline what reasonable development of cyber capabilities looks like, especially in terms of censorship and surveillance, will be essential to discouraging this behaviour in the future. Common terms agreed upon by like-minded countries would provide justification for multilateral action against the use and proliferation of technology that deviates from the expectations of the international community.
