Facial Recognition, Artificial Intelligence And Racial Disparity


A recent study has illuminated an issue in the artificial intelligence community. As reported by The New York Times, facial recognition (software that identifies a person from a photograph) is 99% accurate when the person in the photo is a white man. Unfortunately, the study also shows that the darker the skin, the more errors the technology is likely to make; for darker-skinned women the error rate rises to as much as 35%. The study was conducted by Ms. Buolamwini, a researcher at the M.I.T. Media Lab, and shows how real-life gender and racial biases are transferred to artificial intelligence. At the age of 28, a Rhodes Scholar and Fulbright fellow, she is, according to The New York Times, an advocate for "algorithmic accountability", striving to make automated decisions more "transparent, explainable and fair".

Ms. Buolamwini's most recent paper studied the performance of three leading facial recognition systems, by Microsoft, IBM and Megvii (of China), by "classifying how well they could guess the gender of people with different skin tones". She used a data set of 1,270 faces to test the commercial systems. The New York Times explained that the faces were those of lawmakers from countries with a high percentage of women in office. Three were African nations (commonly with darker-skinned populations) and three were Nordic nations (commonly with lighter-skinned populations). Further, a six-point skin-type scale used by dermatologists was employed, due to its reputation for being more objective than race.

The study found that Microsoft's error rate for darker-skinned women was 21%, while IBM's and Megvii's was nearly 35%. All three had an error rate of less than 1% for white males. Her study is among the first to empirically demonstrate such disparities in commercial artificial intelligence systems.

IBM provided a statement in response to the study explaining that it has improved its technology and is deeply committed to unbiased and transparent services. The company added that it hoped to roll out a system with improved recognition of darker-skinned women this month. Microsoft's comment followed similar lines. Megvii, however, has yet to comment. Darren Walker, the President of the Ford Foundation, said that the research provided a platform for opportunity, as there is "a battle going on for fairness, inclusion, and justice in the digital world". Ms. Buolamwini stated that "technology should be more attuned to the people who use it and the people it is used upon. You cannot have ethical artificial intelligence that is not inclusive. Whoever is creating the technology is setting the standard".

Buolamwini's study highlights the racial and gender disparity not only in the tech world but also in our wider society, and the ramifications artificial intelligence has on our day-to-day lives. Artificial intelligence is predominantly created by white men. In the tech world, "data rules and the technology is only as smart as the data used to train it". One widely used facial recognition data set was estimated to be more than 75% male and more than 80% white. White men are thus far better represented in the training data than darker-skinned women, which in turn means the systems are worse at correctly identifying darker-skinned women.
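The kind of audit described above boils down to a simple idea: instead of reporting one overall accuracy figure, break the error rate out by demographic subgroup. A minimal sketch of that calculation, using entirely hypothetical labels and predictions (not data from the study), might look like this:

```python
# Minimal sketch: disaggregating a classifier's error rate by subgroup.
# All records below are hypothetical, invented purely for illustration;
# they are not data from the study discussed in the article.
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, true_label, predicted_label) tuples.

    Returns a dict mapping each group to its misclassification rate.
    """
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, truth, predicted in records:
        totals[group] += 1
        if predicted != truth:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

sample = [
    ("lighter-skinned male", "male", "male"),
    ("lighter-skinned male", "male", "male"),
    ("darker-skinned female", "female", "male"),    # one misclassification
    ("darker-skinned female", "female", "female"),
]
print(error_rates_by_group(sample))
# A single aggregate accuracy (here 75%) would hide the fact that all
# of the errors fall on one subgroup.
```

The point of reporting per-group rates rather than a single number is exactly the disparity the study surfaced: a system can look highly accurate overall while failing badly on an underrepresented group.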

While AI is used for a variety of purposes, facial recognition technology is seeping into the toolkit of law enforcement. The Center on Privacy and Technology at Georgetown Law estimates that 117 million American adults are currently in facial recognition networks used for law enforcement purposes. The flaws in the technology mean that African Americans are the most likely to be singled out and wrongfully accused: they are disproportionately represented in mug shot databases, and the software is far more likely (up to 35% more likely for a darker-skinned woman) to mistake their identity. For a darker-skinned woman, an error rate of roughly one in three carries serious consequences when such systems inform decisions in the American legal system.

The "slip-ups" are not minor either. In 2015, Google had to apologize after its photo app labeled images of African Americans as gorillas. Facial recognition technology remains lightly regulated, and there are calls to make it more socially accountable.

Accountability, transparency and tight regulation of use are the best tools we have to ensure that racial rifts are not sustained or worsened by technology. Buolamwini's research is a "gutsy" step in the right direction, so that we do not blindly establish yet another system that benefits only some sectors of society while wrongfully punishing others. Her research has laid the foundation for a fairer and more equitable future in facial recognition technology.

Megan Fraser

Megan is a postgraduate student at the University of Canterbury, New Zealand. She is studying towards a Master of Laws in International Relations and Politics.