Fading Out: The Right To Be Forgotten Under International Law


The right to be forgotten, or the right to erasure, is a recently posited right that only makes sense in the context of modern advances in digital technology and telecommunications. Generally, it refers to the ability of an individual to autonomously decide to expunge personal data or information from the internet such that it is no longer accessible to the wider public. If the internet is a massive library, then an unchecked right to be forgotten would give any person the power to remove from that library some or all books that reference them. In an era where every drunken college picture is saved to Facebook’s memory banks, every errant political rant is immortalized in the storied annals of Twitter, and every unsubstantiated criminal allegation is preserved in the unyielding amber that is online news media, the right to be forgotten has emerged as a champion of individual privacy in the face of the omniscient, intangible internet.

Yet the debate is far from one-sided. Efforts to hide the truth, no matter how embarrassing, will always risk infringing on other human rights, like those of free speech and free press. And due to the global reach of key players like Google and Bing, efforts to develop a codified legal standard inevitably become entangled in the complexities of mediating ethics across wildly divergent cultural and national boundaries.

The primary argument surrounding the right to be forgotten is that individuals are entitled to have personal data erased, at least under certain circumstances. The specifics of what those circumstances are, of course, are where the argument becomes complex. Situations where data is no longer necessary or accurate tend to draw the most focus from supporters of the right. A more extreme application would give individuals unchecked power over any use of their personal information. Generally, the push to acknowledge this right in legal codes is rooted in narratives of past information that is no longer relevant to the public good (if it ever was), with the category of “teenage mistakes” being the most widely cited. Washington Post policy writer Caitlin Dewey identifies three tiers of this right that vary in extent and import: “the right to delete things you’ve posted online, yourself; the right to delete something you’ve posted that someone else has reshared; and the right…to delete something that someone else posted about you, but that you otherwise had nothing to do with.”

Though this right has some tangential precedent in how credit scores are treated, it is largely new ground for privacy advocates. And even if it is easy to understand the impetus behind wanting to let young people move on from the digital transgressions of their childhoods, it is harder to pin down where exactly that right should start and end: a triviality when speaking in abstract ethical terms, but vital to embodying this right in legislation.

Further complicating things, those on the other side of the debate have their own rights and privileges. Search controllers like Google, which are tasked with cutting off the public’s access to expunged materials, as well as the news organizations and social networks that publish those materials, are protected by their own ethical claims to free speech and the open flow of information. What’s more, their rights are, in many ways, the rights of individuals, not just those of corporations. After all, the reason they have such rights in the first place is that the public benefits from an extensive, easily accessible archival record. The debate over the right to be forgotten, then, is not so much one of individual rights versus corporate rights as it is one of individual rights versus other individual rights.

This issue became widely publicized and hotly contested following a 2010 lawsuit in Spain. Complainant Mario Costeja González filed a complaint with the Spanish Data Protection Agency concerning a local newspaper that had included his personal information in a 1998 article that remained in its database and was accessible via Google Spain. His suit, leveled against both the paper and Google, argued that the information was old and no longer relevant and that references to him should be removed from the article and from Google’s search results. The case was referred to the Court of Justice of the European Union (EU), with González citing a right to be forgotten found in the legal provisions of the EU’s 1995 Data Protection Directive. The court ultimately ruled in his favor on the claims against Google, with the European Commission explaining that “search engines are controllers of personal data” and that “EU data protection law applies and so does the right to be forgotten.” The precedent set, however, was not an absolute one: the EU only guarantees the legal right to be forgotten when the information or data in question is “inaccurate, inadequate, irrelevant, or excessive.”

Because Google is an American company and the information González wanted erased was not physically processed within the EU, the court’s ruling had international implications. In fact, critics have interpreted it as imposing one nation’s (or rather, one multinational union’s) laws on another sovereign nation. France’s data protection agency, for instance, has since threatened Google with fines for not removing links from searches made in any country, not just those made from French IP addresses.

Aside from the fallout of the González decision, the right to be forgotten has gained traction on the global stage outside the EU’s bounds. The American executive branch referenced the importance of mediating the issue in 2014, and a less extensive manifestation of the right, allowing young people to erase information published before adulthood, has been legally guaranteed in California and proposed in Illinois and New Jersey. Others, especially Americans, have cited cases ranging from leaked celebrity photos to former convicts’ criminal records as further examples of where a right to be forgotten would be necessary.

Though these troubles have always been present, the recent shift in information technology toward a single massive, unmediated internet has made the practical elimination of undesired personal information harder than ever before. And given the moral ambiguity of balancing human privacy against open access to information, a comprehensive solution to this globally impactful debate has yet to be reached.

For one thing, legislation has failed to take into account the variable contexts in which a claim to a right to be forgotten can be made. As has been discussed, the age of the individual at the time of original publication is relevant; children’s information is generally treated differently from that of adults, even if it was initially published willingly. Legislation also needs to account for whether an individual is a public figure, as politicians would likely seek to use this right to erase from public knowledge information that would deem them unfit for office. And a distinction must be drawn between the data publisher and the data access provider, which is to say, between an online newspaper writing something and Google surfacing that link in a search.

Though the EU decision found that the right applied only to the latter, the public’s lack of access to information is effectively a limit on free press: a newspaper that can print whatever it wants, but that has all of its printed copies locked in a vault, could hardly be said to enjoy freedom of the press. And though the EU court encouraged a balancing test to weigh the merits of each claim to erasure on a case-by-case basis, in practice Google itself (or whatever search engine is in question) decides whether or not to delete the targeted link unless the case actually goes to court, something unlikely to happen when the search engine risks steep fines if found guilty. The end result places the arbitrating power over this ethical conundrum in the hands of Google, which, even if well intentioned, is likely to make the safe choice of deleting access to information rather than risk facing charges for not doing so.

Though research by The Guardian shows that 99% of expunged links contained individuals’ private information and had little to contribute to the public good, Jules Polonetsky of the Future of Privacy Forum points out that outsourcing such decisions to corporations like Google sets a dangerous precedent going forward.

The biggest issue at play, however, is what the right to be forgotten would mean for international law if actually implemented. The contrast between the EU’s more privacy-oriented policies and the US’s focus on free speech and press illustrates why a global consensus will be difficult to enact, even though the transnational nature of the internet and of digital entities like Google means that relevant laws will, by their nature, need to apply to the EU, the US, and others alike. This is clear in the aforementioned case of the French data privacy agency threatening to fine America’s Google. Harvard Professor Jonathan Zittrain puts it simply: “France is asking for Google to do something here in the U.S. that if the U.S. government asked for, it would be against the First Amendment.” Leaders at Google itself, as well as at Wikipedia, have additionally argued that allowing a single country to impose its conception of the right to be forgotten on a global information provider, not just within that country’s borders but worldwide, threatens sovereignty, regardless of what the correct balance between privacy and information access may be.

The first step to resolving this issue is clarifying what exactly the right to be forgotten entails. With the EU’s various data protection authorities telling search engines what results they can and cannot provide, it is vital that those authorities make clear to the companies what is actually required of them, and what content is and is not subject to erasure. In practice, much of the case-by-case decision-making will be done by Google, Bing, and others, but the government agencies that make the policies should do their best to ensure clear guidelines for what information must be kept private and what should remain open to public access. The question of punishment for noncompliance must also be reconsidered: even if erasure of a given article or link is not required under the law, the threat of onerous fines may prompt a chilling effect, with corporations preferring to play it safe and delete things that are not actually banned. A non-monetary punishment for failing to fulfill a deletion request, an external review board that ensures appropriate decision-making, or a fine paid by those who bring wrongful suits would all be policy mechanisms for reducing the threat of unnecessary deletions.

Reinterpretations or clarifications of the EU’s legal code, and of those of other nations, could further ease this tension and reduce the conflict inherent in establishing an international norm on this topic. Politico writer Daphne Keller suggests some such options: for example, regulators could exclude search engines or omit user-generated content from the law, or companies like Google could be better enabled to argue, when questioned, that content they choose to keep accessible is relevant to the public good. On the other side of things, the natural tides of human nature may resolve some aspects of this issue without the need for legislative involvement. The rise of anonymity-based services like Yik Yak, and of those that prioritize the impermanence of information, such as Snapchat, suggests that people might be taking matters into their own hands rather than waiting for either governments or corporations to protect their secrets.

Ultimately, the global legislative community needs to acknowledge that the nature of media and information access is changing. Search engines like Google are just as important to free speech and press as traditional publishers, and the global nature of the internet means that the laws of one country can have an impact on an entirely different one, regardless of differing valuations of the balance between privacy and knowledge access. Only by reconsidering the very category of “information” itself will a workable solution to this seemingly intractable issue of international law be reached.

Brian Contreras