What is AI Bias?

AI can be a force for good in many ways, but it’s not without its problems. One of the main challenges faced by the AI community is how to deal with artificial intelligence bias.

What if the decisions machines make on our behalf are based on incorrect or biased information? Just like humans, machines can be guided by biased opinions and even false information. In this episode of Short and Sweet AI, I discuss different types of bias that can affect AI and the possible solutions.

Listen to this episode of Short and Sweet AI to learn more or keep reading…

AI Ethics

When it comes to AI, the ethics are complicated. Many within the AI community frequently wrestle with ethical questions.

What about the potential threat to human dignity? How will self-driving car liability work? What about accountability? That’s just touching the surface of AI ethics. Other issues include the ethics of weaponizing AI and even the existential risk from superintelligence.

Before diving into the scarier or more extreme AI issues, let's start with the one issue front and center: AI bias. It has the potential to affect every aspect of AI, but what is it?

Machines Built with Bias

By their very nature, machines are built with human bias. AI systems are designed by people, and people will always have biases. These might not be intentional, but they can still significantly affect how an AI system makes decisions.

AI is based on algorithms in the form of computer software. When those algorithms learn from data to make decisions on our behalf, that's what we call machine learning, and machine learning is all around us.

Machine learning algorithms supply our Netflix suggestions, choose the posts we see on social media, and deliver the results of our Google searches.

For these algorithms to work, they must first be fed data that we supply. For example, for a machine to recognize what a cat is, you must first feed it thousands of cat images. Soon enough, that machine will recognize a cat better and faster than you can!
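To make that idea concrete, here's a minimal, hypothetical sketch in Python of the same "feed it labeled examples, then let it predict" pattern, using scikit-learn and a couple of made-up numeric features instead of real photos:

# A minimal sketch of supervised learning, using two made-up numeric
# features (a hypothetical stand-in for real cat photos).
from sklearn.neighbors import KNeighborsClassifier

# Each toy "image" is described by two invented features:
# ear pointiness and whisker length, both between 0 and 1.
features = [[0.9, 0.8], [0.8, 0.9], [0.1, 0.2], [0.2, 0.1]]
labels = ["cat", "cat", "not cat", "not cat"]

# "Feed" the labeled examples to the model...
model = KNeighborsClassifier(n_neighbors=1)
model.fit(features, labels)

# ...and it predicts the label of something it has never seen.
print(model.predict([[0.85, 0.75]]))  # -> ['cat']

A real image recognizer works the same way in spirit, just with millions of pixels and far more sophisticated models.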

On a more serious note, these algorithms also make crucial decisions that can have more extreme consequences. For example, computer programs can help police decide where to send resources. They can determine who’s approved for a mortgage, who gets accepted to university, and who gets a job. Machine learning now has a great deal of influence over our daily lives.

In most cases, this works well enough and can save us a great deal of time. The problem arises when machine learning algorithms are influenced by bias.

More and more experts in the field are sounding the alarm. Machines, just like humans, are guided by data and experience. If that data is mistaken or based on stereotypes, it means the machine will make biased decisions.

Types of AI Bias

What types of bias affect artificial intelligence? Experts say that there are three main types: interaction bias, latent bias, and selection bias.

Microsoft’s Failed Chatbot

Interaction bias arises when the users a system learns from feed it biased input.

An example of interaction bias is Microsoft’s failed chatbot. In 2016, Microsoft created a Twitter-based chatbot called Tay. Tay was designed to learn from its interactions with other users.

Unfortunately, Twitter users tweeted lots of offensive statements at Tay, which trained the chatbot to respond accordingly. Unsurprisingly, Tay’s responses became racist and misogynistic, so Microsoft had to shut it down after 24 hours.

While the experiment failed, it clearly highlighted how user interaction can shape a bot's behavior.

Amazon’s Recruiting Bias

Another type of bias is latent bias. This occurs when an algorithm draws incorrect conclusions because its historical training data reflects existing stereotypes.

A well-known example of this occurred with Amazon's recruiting algorithm. After several years, Amazon realized the program was favoring men for software developer roles. This was because the system had been trained on a dataset made up mostly of resumes from men.

As a result, the algorithm penalized resumes that contained any reference to women, such as "women's chess champion." It would even downgrade applicants who had graduated from an all-women's college.

Even after Amazon realized the problem, they struggled to make the program truly gender-neutral, and it was ultimately abandoned.
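Here's a toy, hypothetical sketch (nothing like Amazon's actual system) of how latent bias can creep in: if the historical hiring decisions used as training labels were biased, a model can learn to penalize a word like "women's" even though gender was never an explicit input.

# Toy illustration only (not Amazon's actual system): a text model trained
# on historically biased hiring outcomes learns to penalize gendered words.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "software engineer chess club captain",        # historically hired
    "software engineer rugby team lead",           # historically hired
    "software engineer women's chess champion",    # historically rejected
    "software engineer women's college graduate",  # historically rejected
]
hired = [1, 1, 0, 0]  # labels reflect past bias, not merit

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# The learned weight for the token "women" comes out negative,
# purely because of the biased history the model was trained on.
weights = dict(zip(vectorizer.get_feature_names_out(), model.coef_[0]))
print(weights["women"])  # a negative number

The model never "sees" gender directly; it simply inherits the pattern baked into the historical outcomes.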

Selection Bias Ignores the Real Population

Selection bias is the third type of bias. It happens when a dataset overrepresents one particular group and underrepresents another. This means that the AI system does not represent the real population.

Machine learning datasets scraped from the internet are often biased by nature, because the search engines and data they draw on were developed mostly in the West. This leads to mostly Western-centric results.

For example, an algorithm might only learn what a wedding looks like from photographs of Western-style weddings. It then fails to recognize wedding traditions from other cultures because there's so little diversity in the data.
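One simple, hypothetical way to spot selection bias before training is to compare how often each group appears in the dataset with its share of the real population:

# A quick, hypothetical check for selection bias: compare how often each
# group appears in a dataset with its share of the real population.
from collections import Counter

# Imagined labels for a wedding-photo dataset scraped from the web.
training_labels = ["western_wedding"] * 950 + ["other_tradition"] * 50

counts = Counter(training_labels)
total = sum(counts.values())
for group, count in counts.items():
    print(f"{group}: {count / total:.0%} of the training data")

# If 95% of the examples show one tradition, the model has little chance
# of recognizing the rest of the world's weddings.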

Can Big Tech Really Self-Police?

Researchers are just beginning to understand the implications of machine learning bias. Big tech companies are becoming aware of the problems in their systems and have pledged to fix them.

The next question some have is, can big tech companies really self-police?

Some are questioning how impartial big tech companies can be. For example, Google recently fired Timnit Gebru, a high-profile researcher it had hired to focus on ethical AI. Gebru raised concerns about bias in the large language models Google relies on, only to be let go.

This raises the point: if ethical AI is to mean anything, surely it has to mean something to the most powerful companies in the world first?

The Power of Diversity

What can we do about algorithms that make decisions that influence our lives at every stage?

Experts say we first need to be aware of the problem. Only with that awareness can we make the effort to ensure the datasets we use are unbiased.

One suggestion is to develop programs that test algorithms for bias. A recent study from Columbia University also suggested that when the people developing AI systems come from diverse backgrounds, there is less bias.
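As a hypothetical sketch of what such a bias-testing program might look like, here's a simple demographic-parity check that compares a model's approval rates across two groups:

# A hypothetical bias test: compare a model's approval rates across groups
# (a simple demographic-parity check).
def approval_rate(decisions, groups, target_group):
    """Fraction of people in target_group whose decision was 1 (approved)."""
    outcomes = [d for d, g in zip(decisions, groups) if g == target_group]
    return sum(outcomes) / len(outcomes)

# Imagined model outputs: 1 = approved, 0 = denied.
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["a", "a", "a", "a", "b", "b", "b", "b"]

rate_a = approval_rate(decisions, groups, "a")  # 0.75
rate_b = approval_rate(decisions, groups, "b")  # 0.25
print(f"approval gap: {abs(rate_a - rate_b):.2f}")

A large gap between the two rates doesn't prove the model is unfair on its own, but it's a clear signal that the data and the decisions deserve a closer look.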

Because data scientists can unwittingly inject their own bias into algorithms, diversity is crucial. Diversity means that algorithms will be built for all types of people, not just a small section of society.

As one headline from The New York Times put it, “We Teach AI Systems Everything, Including Our Bias.” Moving towards more awareness and diversity in AI has to be the way forward to eliminate bias in data.

If you enjoyed this episode and blog post, subscribe to the Short and Sweet AI podcast. Please leave a rating and a review because it shows others this podcast is worth listening to and gives me encouragement. You can subscribe for free on Apple Podcasts, Spotify, and others!
