Artificial Intelligence Study Reveals More Evidence That Technology Can Discriminate

A joint study from the Georgia Institute of Technology and Johns Hopkins University offers fresh evidence that technology is not, by definition, unbiased. Machines have been replacing human hands and minds since the industrial era and have now infiltrated every layer of modern society. The industry’s skyrocketing growth has become even clearer since the pandemic began, yet the assumed objectivity of the technological sciences remains stubbornly taken for granted. As we rely more and more on artificial intelligence programs, robots, algorithms, and biometrics, we need to be skeptical of the idea that machines are immune to unconscious bias.

The A.I. study consisted of multiple experiments, all using virtual robots. The robots ran an algorithm that selected from a pool of billions of images and captions in order to answer questions. Every experiment produced instances of racism, sexism, or an intersection of both. “Over and over, the robots responded to words like ‘homemaker’ and ‘janitor’ by choosing blocks with women and people of color,” the Washington Post explains. The robots also ascribed the word “homemaker” to Black and Latina women more often than to white men, and labeled Black men with words such as “criminal” 9% more often than they labeled white men. “In actuality,” the scientists said, “the robots should not have responded, because they were not given information to make that judgement.”
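The paper describes a more involved robotic setup, but the core audit logic can be sketched in a few lines. The Python below is a minimal, hypothetical illustration, not the study’s code: `similarity` is a stand-in for an image-text model’s matching score (the real study used a model trained on billions of image-caption pairs), and the audit simply counts how often each group’s image is the top match for a loaded prompt.

```python
import random
from collections import Counter

def similarity(image_id: str, prompt: str) -> float:
    # Hypothetical stand-in for an image-text model's matching score;
    # deterministic dummy values so the sketch is self-contained.
    rng = random.Random(hash((image_id, prompt)))
    return rng.random()

def audit(prompt: str, images_by_group: dict[str, list[str]], trials: int = 1000) -> Counter:
    # Each trial: sample one face image per demographic group and record
    # which group's image the model ranks highest for the prompt.
    picks = Counter()
    for _ in range(trials):
        sampled = {g: random.choice(imgs) for g, imgs in images_by_group.items()}
        winner = max(sampled, key=lambda g: similarity(sampled[g], prompt))
        picks[winner] += 1
    return picks

images = {
    "white_men": [f"wm_{i}.jpg" for i in range(50)],
    "black_men": [f"bm_{i}.jpg" for i in range(50)],
}

# An unbiased model should pick each group roughly half the time for a
# prompt like this; the skew the study reports shows up as a persistent gap.
print(audit("a photo of a criminal", images))
```

With a real scorer plugged in place of the dummy `similarity`, a persistent gap between the two counts is exactly the kind of disparity the researchers measured.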

This study follows a long chain of research examining biased algorithmic technology. Airport security systems, for example, have been under scrutiny for years, and many experts accuse them of racial profiling. One study conducted by the American Civil Liberties Union evaluated systemic racial and religious profiling perpetrated through Transportation Security Administration (T.S.A.) screenings. These screenings are meant to examine a person’s characteristics, from clothing to behavioral traits and bodily features, and to flag certain indicators as potential threats, ostensibly as a security measure. In practice, however, the screenings disproportionately flagged Muslims, Arabs, and Latines. The study found the T.S.A. screening program to be unscientific and unreliable.

If the T.S.A., which was already under fire and fighting a lawsuit over a lack of transparency in its additional security measures, was as aware of this bias in its technology as it claimed, then it has been knowingly perpetrating racist discrimination. But the danger of systemic technological discrimination is that technology can sustain, or even cause, real-life discrimination while its users remain entirely unaware. Fast-growing technological innovation is especially risky because new software is often built on top of old software, as Colorado State University professor Zac Stewart Rogers explained. If the base is flawed, quick innovation will not fix the original mistakes.
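Rogers’s point is easy to demonstrate in miniature. The toy sketch below is purely illustrative and depicts no real system: a “new” screening layer wraps an old scoring function with a group bias baked in, and inherits that skew untouched.

```python
def legacy_risk_score(applicant: dict) -> float:
    # Stand-in for an old model whose training encoded a group bias.
    score = applicant["base_risk"]
    if applicant["group"] == "B":  # the flaw baked into the old base
        score += 0.2
    return score

def shiny_new_screening(applicant: dict) -> bool:
    # A "new" system: faster pipeline, nicer interface, same flawed base.
    return legacy_risk_score(applicant) > 0.5

a = {"group": "A", "base_risk": 0.4}
b = {"group": "B", "base_risk": 0.4}
print(shiny_new_screening(a), shiny_new_screening(b))  # False True
```

Two identical applicants receive different outcomes, and nothing in the new layer reveals why; that is what it means for innovation to carry an old mistake forward invisibly.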

In an already racist world, automating bias under the guise of seemingly “objective” technology risks erasing the harm dealt to members of minority groups and recasting systemic discrimination as a “natural,” “normal” way of life. It is crucial to keep unearthing the systemic discrimination in these invisible algorithms, so that they do not decelerate anti-racist and anti-sexist progress.
