Artificial intelligence that can sense and interact with human emotions is poised to become one of the most important applications of machine learning in the years to come. One company, founded by a former Google researcher, is developing tools to measure emotions. Affectiva, the MIT Media Lab spinoff that developed the SoundNet neural network, has been acquired by Smart Eye. And a new video-platform feature will soon offer real-time analysis of emotions and engagement during virtual meetings.

Tech companies will be releasing advanced chatbots that mimic human emotions in order to create more empathetic connections with users. In China, average users are said to have conversed with the chatbot Xiaoice more than 60 times in a single month, and in Turing tests users often failed to recognize it as a bot. Analysts project that chatbot interactions in health care will grow by almost 167 percent over the next five years, saving health care systems around the world an estimated $3.7 billion.

Emotional artificial intelligence will also arrive in schools in 2023. In Hong Kong, some secondary schools already use an artificial intelligence program that measures micro-movements of muscles on students' faces and identifies a range of negative and positive emotions, allowing teachers to intervene when a student is losing interest.

There is a problem with the majority of emotional artificial intelligence, however. Even when trained on large and diverse data sets, these systems reduce facial expressions to a single emotion label without taking into account the social and cultural context of the person. It is not always possible to deduce the reason and meaning behind a person's tears, and a scowling face is not necessarily an angry one. Why? Because we adapt our emotional displays according to social and cultural norms, our expressions are not always a reflection of our inner states. People often do emotional work to hide their real feelings, and how they express emotion is likely to be a learned response rather than an innate one. Women, for example, often modify their emotional displays more than men because they are expected not to show anger.

Artificial intelligence technologies that make assumptions about emotional states are therefore likely to make inequalities worse. A UNESCO report, for instance, has shown the harmful impact of the gendering of AI technologies.

Facial recognition can also perpetuate racial inequalities. In an analysis of 400 NBA games with two popular emotion-recognition software programs, Black players were consistently assigned more negative emotions than their white counterparts, even when they were smiling. Such findings echo research showing that Black men are stereotyped as aggressive and threatening in the workplace.

If left unaddressed, emotional artificial intelligence technologies will entrench racial and gender biases, deepen inequalities around the world, and further disadvantage those who are already marginalized.