Explaining to a child how to cross the street in front of their school without being hit by a car only takes a few repetitions and their knowledge can then be generalised to most roads and vehicles.
An AI, by contrast, would need huge quantities of images to learn the same task, and it would make mistakes as soon as it was confronted with situations slightly different from those in its training data set.
The current breed of artificial intelligence – in its most advanced version – is built upon a metaphor of the human brain as a computer made of interwoven neurons. Through a ‘training’ process, the system can ‘learn’ to ‘recognise’ patterns without being explicitly programmed by a human and then apply this ‘knowledge’ to real-world situations, increasingly with better accuracy than humans themselves.
The limit of this metaphor is that it takes a huge quantity of data to obtain this type of result, and those hard-learned skills are confined to the very domain where the AI was trained.
The abstraction and generalisation capabilities of humans are still a mystery to AI researchers, but an element that may guide them in their quest is the emotional nature of human beings. We memorise much better when feeling strong emotions than in ‘boring’ situations. Children’s ability to quickly learn how to properly cross the street is certainly related to their sense of danger and some degree of fear of what could happen if they made the wrong decision.
A machine obviously doesn’t feel – we’ll leave to sci-fi fans the debate over whether consciousness could emerge as a property of complex systems such as neural networks. AI is high on IQ and low on EQ, some might say. But progress in mimicking the functioning of the human brain could require an acknowledgement and a modelling of the emotional nature of Homo sapiens.
Current AI algorithms are not yet able to use emotions to learn from less data or to improve their abstraction and generalisation capabilities. But they are improving at recognising emotions in humans, exploring correlations between symbolic representations of emotions and human expressions, whatever their format.
Progress being made
Some research has already been done on the range of human emotions, thanks to the EU-Emotion Stimulus Set and to people like Houwei Cao, assistant professor in the Department of Computer Science at New York Institute of Technology, who is busy working on algorithms that can read emotions.
Initial efforts were called ‘sentiment analysis’: trying to guess an individual’s state of mind based on what they write or say. The field has since taken a larger perspective, adding language patterns, voice tone, facial movements, sentence structures, and eye motions into the mix.
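To make the text-only starting point concrete, here is a minimal, lexicon-based sentiment scorer of the kind early sentiment-analysis systems used. The word list and weights below are invented for illustration; real systems rely on much larger lexicons (such as VADER's) or on trained models rather than a handful of hand-picked words.

```python
# Hypothetical mini-lexicon mapping words to polarity scores.
# Illustrative only -- production systems learn these from data.
SENTIMENT_LEXICON = {"good": 1, "love": 2, "happy": 1,
                     "bad": -1, "hate": -2, "angry": -1}

def sentiment_score(text: str) -> int:
    """Sum the polarity of each known word; the sign of the total
    gives a crude guess at the writer's overall mood.
    (Punctuation stripping and negation handling are omitted for brevity.)"""
    return sum(SENTIMENT_LEXICON.get(word, 0) for word in text.lower().split())

print(sentiment_score("I love this, it is good"))  # 3
print(sentiment_score("I hate this"))              # -2
```

Such bag-of-words scoring is exactly why the field moved on: it sees only the written signal, which is what the added channels (voice tone, facial movements, eye motions) are meant to compensate for.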
For instance, a mouth shaped in a particular way, plus a voice with a specific pitch compared to its baseline, plus use of words tagged as being positive, equals happiness. Of course, to the average philosopher, that is a rather partial and limited definition of happiness. But it only needs to be operational in the specific context where it is used.
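The "mouth shape plus pitch plus positive words" recipe can be sketched as a rule-based fusion of three weak signals. Everything here – the feature names, thresholds, and labels – is a hypothetical illustration of the operational definition described above; real systems learn these mappings from labelled data rather than hand-coding them.

```python
# Illustrative word list; any real system would use a learned model.
POSITIVE_WORDS = {"great", "love", "wonderful", "thanks"}

def infer_emotion(mouth_curvature: float,
                  pitch_ratio: float,
                  words: list) -> str:
    """Combine three weak signals into a single emotion label.

    mouth_curvature: > 0 means the corners of the mouth are raised (a smile).
    pitch_ratio: current voice pitch divided by the speaker's baseline.
    words: tokens from a transcript of what was said.
    """
    smiling = mouth_curvature > 0.2          # facial signal
    excited = pitch_ratio > 1.1              # vocal signal (above baseline)
    positive_language = any(w.lower() in POSITIVE_WORDS for w in words)

    # 'Happiness' here is just the conjunction of the three signals --
    # the partial, operational definition the text describes.
    if smiling and excited and positive_language:
        return "happy"
    if not smiling and pitch_ratio < 0.9:
        return "subdued"
    return "neutral"

print(infer_emotion(0.3, 1.2, ["Thanks", "that", "was", "great"]))  # happy
```

The design point is that each signal on its own is unreliable; requiring them to agree is what makes the crude definition "operational" within a narrow context such as a call-centre dashboard.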
Emotional AI applied to customer engagement
Indeed, those efforts are improving AI’s relevance to the business world and the fields of application are numerous.
Whether it’s customer engagement or support, a hiring process, or addressing disputes, emotional AI can play an important and useful role for humans. Employees can base their interactions on its insights, adapt their response to emotional changes in the customer, and communicate more effectively with the person on the other side of the line or table.
For instance, the stakes are high for the call centre industry: born out of financial necessity so businesses can afford to serve and support large customer bases, it often turns out to be a source of frustration for users despite well-scripted conversation scenarios followed by the responding agent. When there’s pressure, good manners and empathy can be forgotten. Emotional AI can act as a reminder to employees, so it doesn’t happen.
The same is true of sales forces, whose likelihood of converting a prospect into a customer is directly linked to their ability to empathise with the individual(s) they want to strike a deal with. Indeed, approaching another human with an offering that is rational (adapted to their needs and budget, for instance) but presented without taking their current state of mind into account is at best a waste of time and at worst a lost opportunity.
Emotional AI can help a business stand out from its competitors through the quality of its customer engagement. But how readily will humans accept emotion-driven algorithms?
There will be challenges
In the age of GDPR and stringent privacy rules, the processing of voice, face, and writing by emotional AI algorithms is something that businesses will need to explain to customers, since there is a very thin line between individual mood monitoring and intrusive Orwellian surveillance.
Will a customer value consideration of his or her feelings or mood by a computer as much as genuine empathy expressed by another human being? If, after asking how I am doing – something most people won’t need an AI to remind them to ask – the next question about my latest holiday is in fact an AI-scripted line, the whole introduction might sound a bit phony.
Eventually, could over-relying on AI to read other individuals’ states of mind turn us all into sociopaths unable to properly relate to other humans, just as GPS has slowly but surely eroded our ability to use a map to navigate the real world?
However, those questions might be irrelevant in the not-so-distant future. With the growing sophistication of virtual personal assistants – think Alexa, Siri or Google Home – we may soon delegate our buying decisions to those machines. Vendors’ own AI systems would then have to pitch our AI agents instead of us. And the billions spent annually by marketing departments on branding and ads designed to appeal to our emotions would fall flat.