Humans are incredibly skilled at identifying the emotions in a conversation. We can “hear” a smile. And we can correctly identify emotions in a voice even when we don’t speak the language. In fact, more than 50 categories exist within the human emotions of surprise, joy, anger, sadness, and fear. And each is conveyed through body language, words, or tone. When you recognize these signals and respond appropriately, you have high emotional intelligence, or high EQ.
AI has high IQ but low EQ
We know emotional intelligence and social skills correlate with a person’s potential for success in life. Yet we live in a high-IQ world, surrounded by super-advanced technology and AI systems developed to help us. But they have absolutely no EQ, no emotional intelligence. We need to build emotionally intelligent machines that truly understand human needs so we can have successful interactions with them.
Give machines emotions
The idea of making emotionally intelligent AI has been around for a long time. In 1997 an MIT Media Lab professor, Rosalind Picard, published a book about computers and emotions entitled “Affective Computing.” Affect is a psychology term referring to feeling, emotion, or mood.
Picard is credited with starting the field of computer science known as affective computing. It’s also called emotional artificial intelligence, or emotion AI. Her book outlined how to give machines the skills of emotional intelligence so they can be genuinely intelligent and interact with us naturally. She believes computers should have the ability to recognize, understand, and even have and express emotions. And by the way, this sounds very similar to what Ray Kurzweil has predicted in some of his conversations about the future.
The need for emotion datasets
In 2009 Picard and Rana el Kaliouby, a computer scientist from MIT, started an AI company called Affectiva based on emotion recognition technology. Subsequently, the company created a dataset of 7.9 million faces from 87 countries with recorded expressions for just about every human emotion. Above all, Picard and el Kaliouby wanted to avoid biases in Affectiva’s algorithms. They therefore used a diverse set of faces to capture the differences in expression across ethnic groups, ages, genders, and cultural backgrounds. Incidentally, I talked about the bias in large datasets in a previous flash talk on ImageNet.
Today Affectiva’s algorithms can detect human emotion from facial expressions and vocal cues. But even more, el Kaliouby wants to train machines to recognize the subtle nuances in human emotions. Humans use a lot of nonverbal cues: gestures, body language, and tone of voice all contribute to how emotions are communicated. For that reason, researchers plan to develop emotion AI that is multimodal and can detect emotion the way humans do, from multiple channels. Ultimately, el Kaliouby wants to fuse digital technology with an ability to understand the humans using it.
The application of emotion AI
The power to detect human emotion has implications for every aspect of society. Emotion AI technology can detect mental and physical ailments based on how patients look or sound. In marketing, it gauges consumers’ reactions to commercials and TV shows. In the automotive world, emotion AI can identify distractions inside the car that could affect safety, such as arguments or a driver’s lack of focus. Finally, the biggest role so far has been in customer service: call centers are already using emotion AI to identify the mood of customers on the phone.
In the next flash talk, I discuss how emotional cues become part of a machine’s emotional intelligence and why there are worries.
As always, further reading, videos, and podcasts are in the show notes.
From Short and Sweet AI, I’m Dr. Peper.