Can computer systems ever develop emotional intelligence? Source: Shutterstock

Can the EQ gap between chatbots and humans be narrowed?

BUSINESSES have fallen in love with chatbots over the past few months. They’re affordable, easy to deploy, and can genuinely boost the customer experience by providing round-the-clock support.

Accounting firms, banks, convenience stores, and many others have jumped on the bandwagon and are dazzled by the results (in terms of improvements to their net promoter scores).

And while customers can tell when they’re interacting with chatbots today, experts believe that distinguishing between human agents and chatbots will be nearly impossible in the near future.

According to a recent thought leadership piece published by INSEAD, as time and technology march on, the possibility of “emotional AI” is becoming less distant.

The question, therefore, is this: How can chatbots begin to understand human emotions better? The answer — simply put — is to use audio and video instead of “text only”.

Just as humans often misunderstand each other when communicating via text, it’s hard to expect algorithms to understand and gauge what customers and users feel based on text messages alone.

“Thanks for your help” might be an expression of gratitude just as easily as it might be sarcasm about the chatbot’s inability to sympathize with and adequately support the user.
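A minimal sketch makes the point concrete. The tiny word lexicon and the naive scorer below are invented for illustration, but they mirror how simple text-only sentiment scoring works: the sincere and the sarcastic message receive identical scores because the words are identical.

```python
# Hypothetical mini-lexicon for a naive text-only sentiment scorer.
POSITIVE = {"thanks", "great", "helpful"}
NEGATIVE = {"useless", "terrible", "unhelpful"}

def polarity(message: str) -> int:
    """Naive word-count polarity: +1 per positive word, -1 per negative word."""
    words = message.lower().replace("!", "").split()
    return sum((w in POSITIVE) - (w in NEGATIVE) for w in words)

sincere = "Thanks for your help"    # genuine gratitude
sarcastic = "Thanks for your help"  # sarcasm after a failed support session
print(polarity(sincere), polarity(sarcastic))  # identical scores: 1 1
```

With no signal beyond the words themselves, no amount of cleverness in the scorer can separate the two intents.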

Audio and video make emotions more digitally accessible

Theoretically speaking, when chatbots get access to audio or video, they’ll be far more effective at understanding emotions.

Experts believe that by analyzing facial reactions and tones of voice, and with plenty of training, audio-visual chatbots will be able to differentiate between gratitude, sarcasm, and a myriad of other emotions, enabling them to serve customers better.
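To illustrate how extra modalities could disambiguate identical text, here is a toy sketch. All the scores, thresholds, and the fusion rule are invented assumptions, not a real emotion-recognition pipeline: positive text paired with a negative voice tone and facial expression is read as sarcasm.

```python
# Toy multimodal fusion: each score is assumed to be in [-1, 1], where
# text_polarity comes from text analysis, voice_tone from audio, and
# facial_valence from video. The thresholds are illustrative only.
def classify(text_polarity: float, voice_tone: float, facial_valence: float) -> str:
    """Positive words delivered with negative tone and expression -> sarcasm."""
    if text_polarity > 0 and (voice_tone + facial_valence) / 2 < 0:
        return "sarcasm"
    combined = (text_polarity + voice_tone + facial_valence) / 3
    return "positive" if combined > 0 else "negative"

print(classify(1.0, 0.8, 0.6))    # warm voice, smile -> positive
print(classify(1.0, -0.7, -0.5))  # flat voice, frown -> sarcasm
```

The design point is simply that the non-text channels carry the signal the words lack; a production system would learn such a fusion from labeled data rather than hand-code it.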

Having said this, the next logical question that comes to mind is whether or not it’s possible to create audio-visual chatbots. Again, theoretically, the answer is yes.

Amazon’s smart assistant Alexa can already speak in celebrity voices including Gordon Ramsay, Rebel Wilson, and Cardi B, among others; and we’ve seen how video of Barack Obama was convincingly manipulated via AI — a capability that Adobe has recently expanded on along with DARPA.

Truth be told, given these advances in technology, an audio-visual chatbot that can empathize with users and provide a delightful experience seems like the logical next step in a world where digital-first, on-demand solutions are on the rise.