Data Retention And “Pre-Crime”: Concerns For Privacy And Human Dignity


In China, the U.S., the United Kingdom and Canada, police services have piloted crime-predicting artificial intelligence in an attempt to prevent crimes before they occur. Predictive algorithms are vastly more comprehensive and wide-reaching than human analysis. With that said, there are very real ethical and privacy concerns, reminiscent of the futuristic police-state world of Philip K. Dick’s short story “The Minority Report”, in which clairvoyant “precog” mutants warn the police that certain individuals are about to commit a violent crime, so that they can be stopped before it happens. In reality, AI predictive software makes inferences from data, mass surveillance, and crime reports gathered on individuals and geographic areas, something police departments around the world have done in cruder form for many years. Far more complex algorithms, drawing on a vast array of data sets and facial recognition software, are now extensively used by modern police services to construct a working pattern of crime in an attempt to more accurately predict criminal intent.
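To make the mechanics concrete, the core of many hotspot-style prediction systems can be reduced to a simple pattern: aggregate historical incident reports into geographic grid cells, then rank the cells by recorded activity. The Python sketch below illustrates that principle only; it is not PredPol’s proprietary algorithm, and the coordinates, cell size and incident data are invented for the example.

```python
from collections import Counter

# Hypothetical geocoded incident reports as (x, y) coordinates in metres.
# Real systems ingest years of crime reports; these values are invented.
incident_reports = [(120, 340), (125, 338), (900, 410), (122, 341), (905, 415)]

CELL_SIZE = 150  # side length of each grid cell in metres (arbitrary choice)

def cell_of(x, y):
    """Map a coordinate to the grid cell that contains it."""
    return (x // CELL_SIZE, y // CELL_SIZE)

# Count how many past incidents fall into each cell.
counts = Counter(cell_of(x, y) for x, y in incident_reports)

# "Predict" by ranking cells on historical volume: the cells with the most
# past reports are flagged as candidate hotspots for extra patrols.
for cell, n in counts.most_common(3):
    print(f"cell {cell}: {n} past incidents -> candidate patrol hotspot")
```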

Projects such as the Los Angeles Police Department’s PredPol initiative have reportedly resulted in a 33% decrease in burglaries, thanks to the software’s ability to outline more efficient patrol routes, and its apparent success has seen it spread to other major U.S. cities, such as Atlanta and Seattle, as well as globally. Meanwhile, the Vancouver Police Department claims an 80% accuracy rate when using machine learning to detect crimes such as burglaries, which are considered to form coherent patterns that can be analysed. At this stage it may seem harmless enough, and indeed it might be considered very positive by those who believe crime to be out of control. Nonetheless, the use of statistical analysis to prevent crime raises two serious societal problems. First, cultural, religious and racial stigmatization will increase substantially when it is assumed, on the basis of collected data, that crimes will occur in certain areas. Second, privacy issues arise because extensive data retention and surveillance are required for the predictive analysis to be effective.

Predictive analysis draws on a vast array of socioeconomic data, from criminals and non-criminals alike, which means there will inevitably come a time when such ‘predictive’ analysis reinforces and compounds socioeconomic, ethnic and racial stereotypes. According to the Washington Post, the software highlights certain ethnic, religious or racial categories, or geographic “hotspots” that can span multiple city blocks. These will then be more frequently targeted by police as the predictive software crunches the numbers and advises departments to increase their presence in areas where crime statistics are higher than elsewhere. This can only lead to law enforcement exacerbating stereotypes as they act on biased data. There is a difference between prediction and prevention in the true sense of the word, and it is a gap that predictive policing may never overcome. As much as artificial intelligence can assist in establishing patterns of criminal behaviour, the practice is more a process of discouraging criminal behaviour by bolstering police presence in “trouble” areas, according to Vancouver Chief of Police Adam Palmer. Actually preventing an individual from committing a crime before it happens strays too far into the realm of science fiction to be taken seriously. One could argue that predictive policing is not a new phenomenon at all when it comes to effective policing: police officers inevitably accumulate experience of various neighbourhoods and come to know which individuals may be more at risk of committing crimes than others. The difference is that for predictive analysis to work effectively, it must be fed an enormous amount of data, much of it taken from third parties without consultation.
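The feedback loop described above is easy to demonstrate with a toy model. In the sketch below, which is entirely my own construction with invented numbers and not drawn from any deployed system, two neighbourhoods experience identical underlying crime, but the area with more historical records receives most of the patrol presence, and only observed crime gets recorded; the small initial gap in the data steadily compounds.

```python
import random

random.seed(1)

TRUE_RATE = 20               # identical expected crimes per year in both areas
records = {"A": 12, "B": 8}  # area A starts with slightly more recorded incidents

for year in range(10):
    # The "predictive" step: the area with more recorded crime gets most
    # of the patrol presence (70% versus 30% here, an arbitrary split).
    top = max(records, key=records.get)
    shares = {area: (0.7 if area == top else 0.3) for area in records}
    for area in records:
        # The same amount of crime occurs in each area, but a crime is only
        # recorded if a patrol happens to be present to observe it.
        observed = sum(random.random() < shares[area] for _ in range(TRUE_RATE))
        records[area] += observed

print(records)  # area A's small initial lead becomes a large recorded-crime gap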

Retaining massive amounts of data and expanding surveillance are therefore extensions of more traditional investigative techniques, though they undermine an individual’s right to privacy for the “greater good” of preventing crime. Professor of criminology John Eck describes the process as a “whack-a-mole policing mentality” that leans too far towards disregarding people’s rights to privacy and democratic values for the sake of predicting crimes that may never occur. Because of this dilemma, the continued implementation of predictive analysis will depend on the clash between the demand for better policing through big data collection and the protection of privacy and anonymity. For example, with the increased frequency of attacks carried out in Europe by the terrorist organisation Islamic State, there may be a very serious demand from society for the sort of predictive technology that could reduce or even prevent such attacks. To that end, state anti-terror laws, such as the U.S. Patriot Act, signed into law by President George W. Bush shortly after the September 11 terror attacks, allow investigative services to easily undertake surveillance of anyone suspected of terrorism. It is not hard to imagine these same services using predictive software to counter terrorism, without the need for public accountability, if the public is more concerned with safety than with privacy. The same goes for spikes in violent crime: if the public perception is that violent crime is out of control, there will presumably be a higher demand for AI predictive software. It boils down to a question of security, which is inherently political, but at the very least policies that intrude on privacy must be made completely transparent if they are to be considered in law enforcement at all.

As artificial intelligence and surveillance become more efficient, flexible and powerful, compensating for human fallibility, there is no doubt that they will be more actively utilized to prevent crime more frequently. While this may not be such a significant concern so long as human discretion is not lost in the process of policing, only time will tell whether concerns over the privacy and surveillance of individuals, in the drive to gather as much data as possible on suspects, will prevent further development of these hybrid crime prevention initiatives. In April 2014, the European Court of Justice (ECJ) invalidated the Data Retention Directive enacted in 2006, which had obliged EU member states to retain their citizens’ telecommunications data for between 6 and 24 months. The ECJ ruled that the Directive “interferes in a particularly serious manner with the fundamental rights to respect for private life and to the protection of personal data.” Such a ruling is a crucial acknowledgement of citizens’ right to privacy. However, policy makers will not find much difficulty in implementing AI prediction measures if public opinion favours them: if the public feels unsafe, or does not believe that traditional law enforcement is up to the task of preventing serious crimes, then predictive policing will gain ever more traction as time goes on.

Hugh Davies

Research Analyst at CAPA Centre for Aviation
Recently graduated from the University of New South Wales with a BA in International Studies. I have a passion for understanding how the world really works, writing about international affairs, and speaking French!