Next Steps

For this AI to become more effective and accurate, it needs data on how emotions are expressed around the world. We must study facial expressions and the emotions they correspond to not only in the U.S. but in many other countries. As Oscar Schwartz states in his Guardian article, “This is why Affectiva collects data from 87 countries. Through this process, they have noticed that in different countries, emotional expression seems to take on different intensities and nuances. Brazilians, for example, use broad and long smiles to convey happiness, Kaliouby says, while in Japan there is a smile that does not indicate happiness, but politeness.” This method of data collection is the best way to make progress with this technology, since the way people display emotion differs depending on where they are from and how they were raised. Adding this extra layer of analysis to the system brings more representations into the process, making it possible for emotion detection to be more accurate across different ethnic groups.

Ethical Issues

Privacy Invasion

This artificial intelligence is an invasion of privacy. Few people are aware that this technology is out there observing them and being used to draw conclusions about who they are as a person. McStay (2020) argued that this touches on the right to human dignity, since these AIs are automated processes, controlled by large companies and governments, that turn the human face into another form of measurement and categorization. The data collected with this technology is used to make deciding factors about a person. This is an invasion of privacy because people are being observed without their consent. One example of people who can be greatly affected are those who move daily through urban spaces like train stations. As McStay (2020) continues, these people automatically become identifiable groups and go through assessments that can lead to their being treated differently from other commuters, based on a collection of data they have no control over. This means it is now possible to single out individuals with this emotion-detecting AI.

This is a big problem because this technology will continue to be upgraded for multiple uses. One example is large-scale security applications. This is concerning because, as McStay (2020) argues, such emotion-based data can lead to widespread group-based data profiling and group discrimination. Government officials will easily be able to target certain groups of people by using this AI as justification.

Racial Inequality

This emotion detection AI struggles to identify the emotions of people of color. In her research study, Lauren Rhue explains and demonstrates the ways this AI fails to evaluate people with darker skin equally to those with lighter skin. Her study shows that when a Black person and a white person are both smiling, the Black person consistently scores lower in happiness, even in cases where they are smiling more broadly than the white person. This shows that the AI associates Black faces with more negative emotions than white faces, allowing those in power to correlate those emotions with threatening behavior and feed into present-day stereotypes. To be seen as “non-aggressive,” Black people may have to exaggerate and over-emphasize their emotions to prevent negative detection by the AI, which already puts them at a disadvantage. This reinforces existing biases that harm these communities.

Different Applications

This artificial intelligence has spread from in-house research facilities to wearables, human-robot interaction, education, retail, employee monitoring, border control, and other online and physical contexts (McStay, 2020). It is believed that these technologies make it possible to use emotional states to enhance our interactions with different devices and even places.

Improve Shopping Experiences

Emotion detection AI has been widely used to provide emotional-reactivity feedback to marketers so that they know what to advertise and how to advertise it. By using emotion-detecting AI, marketers can analyze and store data about where in a location customers have shown the most signs of happiness, and use that to make shopping a better experience. A successful example appears in research conducted by Gonchigiav (2020), who argued that emotion-reading AI can be used to improve supermarket sales by observing how people seem to feel depending on which aisle they are in or what they are looking at.

Thanks to emotion-reading AI, Gonchigiav was able to work out which techniques positively stimulate customers’ interest and emotions. In this case, emotion-reading AI has been a very helpful tool for verifying the effectiveness of certain marketing strategies within store departments, and it has helped improve the grocery-shopping experience while increasing sales.
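As a rough illustration of the kind of per-aisle analysis described above, the sketch below is a hypothetical Python example (not Gonchigiav’s actual method): it averages happiness scores that an emotion-AI system might report for shoppers, grouped by aisle, so a marketer could see which departments drew the most positive reactions.

```python
from collections import defaultdict

def happiest_aisles(readings):
    """Average per-aisle happiness scores from hypothetical
    emotion-AI readings of shoppers, returned best-first.
    `readings` is a list of (aisle_name, happiness_score) pairs."""
    scores = defaultdict(list)
    for aisle, score in readings:
        scores[aisle].append(score)
    # Sort aisles by mean happiness, highest first.
    return sorted(((sum(s) / len(s), a) for a, s in scores.items()),
                  reverse=True)

# Example with made-up readings: "produce" averages higher than "cereal".
ranking = happiest_aisles([("produce", 8), ("produce", 6), ("cereal", 4)])
```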

Assistive Technology for Autism 

Emotion detection AI has been used as an assistive technology for people, especially children, with disabilities, specifically autism spectrum disorder (ASD). This common disorder affects the ability to interact and communicate with others, making it difficult for those who have it to connect facial features to emotions and respond appropriately to other people’s emotions. Sympathizing and showing empathy toward others can also be a significant challenge. People with ASD may express emotions that do not match the situation, as stated in a study by Akansha Singh and Surbhi Dewan called “AutisMitr: Emotion Recognition Assistive Tool for Autistic Children”: “They may show signs of joy when someone is hurt, or they respond with no emotions whatsoever. Thus, the inability to respond appropriately to others’ emotions may create appearances that autistic people don’t feel emotions.” This idea that people with autism don’t feel emotions is entirely wrong and can give rise to very harmful beliefs; they do feel emotions but have difficulty expressing them in expected ways.

This is where a mix of facial and emotion detection AI comes into play. The AI not only identifies faces in a picture but also measures the types of emotions displayed based on facial features. As shown in the study, this has been used in multiple videos and games that teach children with autism about different kinds of emotions while analyzing their progress. Progress is checked by seeing whether, after the videos and games, children are better able to match their feelings to the emotions appropriate for specific occasions than they were before. This assistive technology helps people with ASD carry out daily activities and makes understanding emotions a little less difficult for them.

Provide Video Game Feedback

This AI has been used to detect the types of emotions that video games spark in the people playing them. With this AI, observers can see which emotions a person is experiencing in real time as they play. This matters because every video game tries to provoke a specific set of emotions and behaviors from players, so game developers use these technologies to test whether users display the emotions they are aiming for. They do this during the game’s testing phase, when people are asked to play for a set amount of time. During this time, players are monitored for the emotions they display, and with this feedback developers can adjust the game until it produces the intended responses.

Pain Assessment

It is also believed that this technology can be used in hospitals as a way to assess pain. The idea is to help people who are either unable to show when they are in pain or too shy to say anything. With this technology, proponents believe they can identify the exact moments when someone feels uncomfortable during a medical procedure and modify practices from there in the patient’s favor. In other words, it could lead professionals to switch to another method of treatment, one that works better for the patient. If clinicians see that a patient demonstrates high levels of uneasiness during treatment, they will know the patient is experiencing discomfort.

On the other hand, a big problem is the possibility of this AI being completely wrong and inaccurate. That would cause major issues and misunderstandings in communication between patient and doctor. It could lead to unnecessary actions being taken and waste both of their time.

School Application

This AI has also been used to supervise schools, with the goal of supporting learning in students and facilitating and regulating behaviors and moods. The AI would check video surveillance for students performing suspicious behaviors. It was believed that with this technology, adults in the school would be more aware of how their students feel, check in on them, and make sure they are emotionally stable. The AI was also put in place in schools to make sure students were focused and engaged in the lesson.

Jordan Harrod, the speaker in a YouTube video called “Can AI detect your emotion?”, explained why using emotion detection AI in schools is a very bad idea. She describes how these AIs are used to check whether students are performing suspicious behavior. ProctorU recently stopped using these AIs over concerns that teachers weren’t actually reviewing the flagged footage, just using it to unfairly penalize students. In other words, the use of this AI in the classroom made it harder for students to learn, added stress to their daily lives at school, and gave teachers and administrators a tool they could abuse to target students.

Workplace Usage

It is believed that having this technology in work settings makes it possible to judge risks, facilitate and regulate behaviors and moods, and even improve people’s performance (McStay, 2020). For employees, these machines check that workers keep positive attitudes throughout the day and work efficiently. What employers do with this data is entirely under their control, and most of the time workers have no idea they are being monitored for emotion. Employers do this as a way to track worker productivity, and this data has even been used to decide whether employees deserve raises or promotions.

This technology is used even before you are employed. It is applied to virtual interviews, where employers analyze the footage to evaluate potential job candidates and then decide whether someone gets hired. The Atlantic article “Artificial Intelligence Is Misreading Human Emotion” states, “In 2014, the company launched its AI system to extract micro-expressions, tone of voice, and other variables from video job interviews, which is used to compare job applicants against a company’s top performers.” In other words, applicants are set side by side with people already employed and evaluated with this technology to see whether they show similar attitudes. Job applicants are judged unfairly on the emotions they portray through their facial expressions or voices, compared against higher-performing employees to check for similar potential. If they fail to display attitudes similar to current employees, they are automatically seen as unqualified for the job.

Using this AI to judge someone’s ability to work is unethical and unfair. It causes people to lose job opportunities based on unrealistic and inaccurate data, harming people who are simply trying to get jobs. Basing hiring decisions on how someone’s face looks at every moment makes no sense, since portraying emotion is something that happens naturally.

The AI is used to monitor not only workers but customers as well. It is trusted with the task of detecting potential shoplifters by tracking and analyzing facial cues, and customers get stopped in stores and questioned for suspicious behavior based on what the technology detects.

“These are the people who will bear the costs of systems that are not just technically imperfect, but based on questionable methodologies.”

– Kate Crawford, in The Atlantic

“There is no good evidence that facial expressions reveal a person’s feelings. But big tech companies want you to believe otherwise.”

– Kate Crawford, in The Atlantic

Behind the Scenes of Emotion Detection AI

The first step in emotion detection AI is to take an image frame from a camera feed and detect the human face. The face is located and highlighted with a box, and coordinates within that box pinpoint the exact face location in real time. This is where facial recognition AI comes into play. However, this step sometimes fails when the image or video is not clear enough to detect a face, due to difficult lighting conditions, unusual head positions, distance, or other obstructions.
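The detection step above can be sketched as a toy example. The function below is a hypothetical stand-in for a real detector (such as a Haar cascade): it simply treats bright pixels as “face” pixels and returns their bounding box, or None when nothing is found, mirroring the failure case described above.

```python
def locate_face(frame, threshold=128):
    """Toy stand-in for a real face detector: return the bounding
    box (x, y, w, h) of all pixels brighter than `threshold` in a
    grayscale frame (list of rows), or None when no face-like
    region is found (poor lighting, occlusion, etc.)."""
    hits = [(x, y) for y, row in enumerate(frame)
                   for x, value in enumerate(row) if value > threshold]
    if not hits:
        return None  # detection failure, as described in the text
    xs = [x for x, _ in hits]
    ys = [y for _, y in hits]
    return (min(xs), min(ys),
            max(xs) - min(xs) + 1, max(ys) - min(ys) + 1)

# A 5x5 dark frame with a bright 2x2 "face" patch.
frame = [[0] * 5 for _ in range(5)]
for y in (2, 3):
    for x in (1, 2):
        frame[y][x] = 200
box = locate_face(frame)  # bounding box of the bright patch
```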

The next step is to crop, resize, and rotate the image as much as necessary. This step is also known as preprocessing. After the face is detected, the picture is enhanced to help guarantee “accurate” emotion detection; changes might include image smoothing and even color correction. This improves the chances of a correct analysis.
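A minimal sketch of the crop-and-resize part of preprocessing, assuming the frame is stored as a list of pixel rows (hypothetical code, not any vendor’s pipeline; real systems also rotate, smooth, and color-correct):

```python
def preprocess(frame, box, size=2):
    """Crop the detected face box out of `frame`, then resize it to
    a fixed `size` x `size` square using nearest-neighbour sampling,
    so every face reaches the classifier with the same dimensions."""
    x, y, w, h = box
    crop = [row[x:x + w] for row in frame[y:y + h]]
    # Nearest-neighbour resize: sample the source pixel each output
    # cell maps back to.
    return [[crop[j * h // size][i * w // size] for i in range(size)]
            for j in range(size)]

# Downscale a 4x4 "face" crop to 2x2.
face = [[1, 2, 3, 4],
        [5, 6, 7, 8],
        [9, 10, 11, 12],
        [13, 14, 15, 16]]
small = preprocess(face, (0, 0, 4, 4), size=2)
```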

After the image is prepared, emotions can finally be extracted from it. Facial features are deeply analyzed and categorized into seven possible emotions: happiness, sadness, fear, disgust, surprise, anger, or neutral. This is done by using these AIs to study the “motion of facial landmarks, distances between facial landmarks, gradient features, facial texture, and more.” It also involves separately classifying the muscle contractions that happen in your face when you have a positive or negative response to something.
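As a toy illustration of this classification step, the hypothetical sketch below assigns one of the seven labels by matching a feature vector (standing in for landmark distances and texture measurements) to its nearest stored prototype. Real systems use learned models, not this simple Euclidean match.

```python
EMOTIONS = ["happiness", "sadness", "fear", "disgust",
            "surprise", "anger", "neutral"]

def classify(features, prototypes):
    """Pick the emotion whose prototype vector is closest to
    `features` (squared Euclidean distance). `prototypes` maps an
    emotion label to a reference feature vector."""
    def sq_dist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    return min(prototypes, key=lambda label: sq_dist(features, prototypes[label]))

# Made-up 2-dimensional "features" for three of the seven labels.
prototypes = {"happiness": [1.0, 0.0],
              "anger":     [0.0, 1.0],
              "neutral":   [0.5, 0.5]}
label = classify([0.9, 0.1], prototypes)  # closest to "happiness"
```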

However, before this classification is possible, the algorithms are trained on examples they have seen before. For these algorithms to connect a face to a specific emotion, they must have prior knowledge of what that emotion actually looks like. For example, if the AI is shown many different pictures of a “happy face,” it can later distinguish a happy face when shown one. This is how these AIs identify and label the emotion portrayed: by matching faces to what they have seen before.
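The training idea above, showing the algorithm many labeled examples so it can later match new faces against what it has seen, can be sketched as averaging labeled feature vectors into one prototype per emotion. This is a hypothetical toy, not how commercial systems actually train.

```python
def train_prototypes(examples):
    """Build one prototype vector per emotion by averaging the
    labelled feature vectors seen during "training".
    `examples` is a list of (emotion_label, feature_vector) pairs."""
    sums, counts = {}, {}
    for label, vec in examples:
        counts[label] = counts.get(label, 0) + 1
        acc = sums.setdefault(label, [0.0] * len(vec))
        for i, value in enumerate(vec):
            acc[i] += value
    # Divide each accumulated sum by the number of examples seen.
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

# Two made-up "happy faces" and one "angry face" as 2-D features.
protos = train_prototypes([("happiness", [1, 1]),
                           ("happiness", [3, 3]),
                           ("anger", [0, 2])])
```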

“Emotion detection technology requires two techniques: computer vision, to precisely identify facial expressions, and machine learning algorithms to analyze and interpret the emotional content of those facial features.”

– Oscar Schwartz, in The Guardian

Emotion Detection AI

What Is Emotion Detection AI?

Artificial intelligence is the simulation of human intelligence by machines and technology. As time goes by, artificial intelligence continues to progress into different forms of technology, and a prime example is emotion detection AI. Emotion detection AI uses artificial intelligence to detect the mental state of an individual. More specifically, it aims to pinpoint and identify the emotions or feelings a person portrays by observing their facial expressions. This can be done through video, a live feed, or even a picture; the AI can go frame by frame to view changes in facial expressions, and the events leading up to those changes, to help it “read” emotions accurately. It does this by observing and measuring a person’s facial movements and expressions and connecting them to an idea developed long ago.

The idea that facial expressions equal emotion was developed in the 1960s by Paul Ekman. He proposed that facial expressions are universal and defined six basic emotions: anger, disgust, fear, happiness, sadness, and surprise. He also believed that micro-expressions alone can tell you a lot about a person. Based on this concept, later emotion-based technologies, such as emotion detection AI, were built to detect a fixed set of emotions by simply analyzing a face. However, this method is not always guaranteed to work.

Accuracy of Emotion Detection AI

The idea that emotions can be predicted solely from someone’s facial expressions is false. A person’s facial expression is not always what they feel on the inside. It takes more than a face to correctly infer a person’s emotion, especially since many people display their emotions differently. As stated in a study by Andrew McStay, a key reason this method of emotion recognition is inaccurate is that similar configurations of facial movements appear in more than one emotion category. Movements like raised eyebrows and squinting eyes can be matched to more than one emotion. For example, declaring that someone with squinting eyes and low, arched eyebrows is “angry” is wrong: even though the person shows an expression we associate with anger, we as outsiders cannot actually know what they are truly feeling.

This idea that certain facial expressions or movements belong to certain emotions is called an emotional stereotype, and it is currently the main problem with these technologies. To predict emotions accurately, more context about the specific situation is crucial. Furthermore, methods may have to become more invasive, since background information about what is going on during the time span in which an emotion is analyzed is needed.

“How people communicate anger, disgust, fear, happiness, sadness, and surprise varies substantially across cultures, situations, and even across people within a single situation.” 

– a team at Northeastern University and Massachusetts General Hospital

Future of This AI

These AIs are being improved daily to play a bigger role in our lives. As stated before, sooner or later they will be heavily active in security systems and cameras, and as a result they will have great significance in our justice system. This means the AI can potentially make it easy to target and discriminate against people of color. It will be used to normalize racial profiling and give authorities a reason to target certain people, using data collected by the AI to label someone a threat or suspicious. It will also lead to finger-pointing whenever an inconvenience happens: for any complication that occurs, people will easily be able to place blame on someone else, using this technology as backup. The way someone presents themselves physically will be used to draw significant conclusions against them for something they do naturally. It is no longer a technology that just detects emotion; it is a technology that is hurting lives and futures.

Being suspected because of inferences made by an AI can ruin a person’s entire future, since you can potentially even get a criminal record because of it. Being taken into custody because a machine detected suspicious behavior can have permanent repercussions, including difficulty finding employment, since many jobs will turn you down or make it hard to hire you with a criminal record.