EXECUTIVE SUMMARY:

Uncovering and understanding customer emotions in real time is thrilling and, in theory, extremely useful. Modern AI tools can interpret facial expressions, analyze vocal patterns, track eye movements, and more. All of this data is supposed to help companies target their markets more precisely.

Yet, are organizations that implement AI to interpret human emotions really gathering accurate data?

If humans can’t always discern someone’s thoughts and motives, how can we expect AI, a man-made data-analysis tool, to beat us at this game?

We have long known that artificial intelligence carries built-in biases against certain demographics. Citizens and politicians alike have proposed that the US Congress create legislation around the use of these biased tools, especially since police departments around the country have been quick to purchase them.

Studies have shown that facial recognition AI is significantly less accurate at identifying people of color, including African Americans, Native Americans, and Asians, than at identifying Caucasians.
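
This kind of disparity only becomes visible when a model's accuracy is broken out per demographic group rather than averaged over everyone. Below is a minimal Python sketch of such a disaggregated check; the record format, IDs, and group labels are illustrative assumptions, not data from the studies cited.

    from collections import defaultdict

    def accuracy_by_group(records):
        """records: iterable of (predicted_id, true_id, demographic_group)."""
        correct = defaultdict(int)
        total = defaultdict(int)
        for predicted_id, true_id, group in records:
            total[group] += 1
            if predicted_id == true_id:
                correct[group] += 1
        # Report accuracy per group so disparities are visible, not averaged away.
        return {g: correct[g] / total[g] for g in total}

    # Hypothetical evaluation records: (predicted ID, true ID, group label).
    sample = [
        ("A12", "A12", "Caucasian"),
        ("B07", "B07", "African American"),
        ("C33", "C90", "African American"),  # misidentification
        ("D41", "D41", "Asian"),
    ]
    print(accuracy_by_group(sample))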

As the Harvard Business Review writes, “In short, if left unaddressed, conscious or unconscious emotional bias can perpetuate stereotypes and assumptions at an unprecedented scale.”

Given the shortcomings of this technology in its current state, conscientious organizations may wish to avoid it.

To eliminate bias from AI, development teams need to be diverse, and so do the training sets the AI ‘learns’ from. As the article warns, “Failure to act will leave certain groups systematically more misunderstood than ever—a far cry from the promises offered by emotional AI.”
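
One practical way to act on the training-set point is to audit the demographic balance of the data before training. The following is a hedged sketch: the per-sample "group" label and the 10% flagging threshold are assumptions for illustration, not requirements from the article.

    from collections import Counter

    def group_shares(samples, min_share=0.10):
        """samples: iterable of dicts, each carrying a 'group' label (assumed schema)."""
        counts = Counter(s["group"] for s in samples)
        n = sum(counts.values())
        shares = {g: c / n for g, c in counts.items()}
        # Flag any group falling below the assumed minimum share.
        underrepresented = [g for g, share in shares.items() if share < min_share]
        return shares, underrepresented

    # Hypothetical, deliberately skewed training set.
    training_set = ([{"group": "Caucasian"}] * 70
                    + [{"group": "Asian"}] * 25
                    + [{"group": "African American"}] * 5)
    shares, flagged = group_shares(training_set)
    print(shares)   # per-group share of the training data
    print(flagged)  # ['African American'] under the assumed 10% threshold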

For more on this story, visit the Harvard Business Review.