Why is bias in artificial intelligence so important? Many people don't realize that the algorithms used in AI today have a great impact on our daily lives. These software programs decide whether we're invited to a job interview, eligible for a mortgage, or under surveillance by law enforcement. Organizations make these decisions with algorithms trained on datasets. If the datasets only reflect a few groups, such as college-educated people or people from certain socioeconomic backgrounds, then the decisions will be biased.

Bias In = Bias Out

The researchers who developed these datasets did not make the AI systems this way on purpose or out of malice. The bias is largely unintentional and unconscious. The people who create the algorithms have their own experiences and blind spots, and the data reflects this. And when the AI team members who program the computers are not a diverse group, the algorithms end up with more bias.

A quick example of bias in artificial intelligence is voice assistants like Alexa that have been trained on huge datasets of recorded speech from white, upper-middle-class Americans. As a result, the technology doesn't understand commands from people with different accents and expressions.

ImageNet Roulette

In September 2019, a program called ImageNet Roulette caused a Twitter storm. People uploaded their selfies to the online program, which used ImageNet to create labels. The labels attached to the selfies ranged from benign ones like "face" or "a person of no influence" to more troubling ones such as "first offender" and "rape suspect." The project showed the dangers of feeding flawed data into an AI algorithm.

ImageNet is a 15 million image dataset that unlocked the potential of deep learning, a type of artificial intelligence used for everything from facial recognition to self-driving cars. This massive dataset is routinely used to train deep learning algorithms. But ImageNet Roulette's creators wanted to crack ImageNet open and show how biased its images are, and how a flawed dataset can lead to many flawed algorithms. As a result, a massive effort was launched to remove the most offensive labels and make the images more diverse.

AI Needs Diversity

Fei-Fei Li, the computer vision expert who created ImageNet, has become a champion of making AI less biased and better for humanity. She left Google to lead Stanford's new Institute for Human-Centered AI. She has testified at congressional hearings about the need for changes that ensure the people engineering AI are diverse. And she founded AI4All, a summer program for high school girls, to develop more diversity in artificial intelligence.

Ten years after the launch of ImageNet, Li believes AI research needs to include people from neuroscience, psychology, philosophy, and other disciplines to create AI with more human sensitivity. As she has said: "There is nothing artificial about AI. It's inspired by people. It's created by people and most importantly, it impacts people. It is a powerful tool we are only just beginning to understand, and that is a profound responsibility."

As always, links to further reading, videos, and podcasts are in the show notes.

From Short and Sweet AI, I’m Dr. Peper

https://www.wired.com/story/ai-biased-how-scientists-trying-fix/

https://www.scmp.com/magazines/post-magazine/long-reads/article/2183463/bias-bias-out-stanford-scientist-out-make-ai-less

https://www.excavating.ai/

https://www.wired.com/story/fei-fei-li-artificial-intelligence-humanity/
