Ethical Issues

Privacy Invasion

This artificial intelligence is an invasion of privacy. Few people are aware that this technology is out there observing them and being used to draw conclusions about who they are as a person. McStay (2020) argues that this touches the right to human dignity, since these AIs are automated processes controlled by companies and governments that turn the human face into another form of measurement and categorization. The data collected with this technology is used to make consequential decisions about a person, and because people are observed without their consent, this constitutes an invasion of privacy. People who move daily through urban spaces such as train stations are an example of those who can be greatly affected. As McStay (2020) further claims, these commuters automatically become identifiable groups and undergo assessments that can lead to their being treated differently from other commuters, all on the basis of data they have no control over. This means it is now possible to single out individuals using this emotion-detecting AI.

This is a serious problem because the technology will continue to develop and acquire new uses, one example being large-scale security applications. This is concerning because, as McStay (2020) argues, such emotion-based data can lead to widespread group-based data profiling and group discrimination. Government officials could easily target certain groups of people while using this AI as justification.

Racial Inequality

This emotion detection AI also struggles to identify the emotions of people of color. Lauren Rhue's research demonstrates the ways this AI fails to evaluate people with darker skin equally to those with lighter skin. Her study shows that when a Black person and a White person are both smiling, the Black person consistently scores lower on happiness, even in cases where they are smiling more broadly than the White person. This indicates that the AI associates Black faces with more negative emotions than White faces, allowing those in power to link those emotions to threatening behavior and feed into present-day stereotypes. To avoid negative detection by the AI and be seen as "non-aggressive," Black people may have to exaggerate and overemphasize their expressions, which already puts them at a disadvantage. This reinforces existing biases that harm these communities.
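To make the kind of disparity Rhue describes concrete, the short sketch below compares mean happiness scores across two groups of matched, smiling faces. It is a minimal, hypothetical illustration: the numbers, the happiness_scores variable, and the group_gap function are invented for this example and are not drawn from Rhue's study or any real emotion detection system.

```python
from statistics import mean

# Hypothetical happiness scores (0.0 to 1.0) that an emotion detection
# system might assign to matched photos of smiling faces.
# These numbers are invented for illustration; they are not Rhue's data.
happiness_scores = {
    "Black": [0.62, 0.58, 0.71, 0.55, 0.66],
    "White": [0.88, 0.91, 0.84, 0.90, 0.86],
}

def group_gap(scores):
    """Print each group's mean happiness and return the between-group gap."""
    means = {group: mean(vals) for group, vals in scores.items()}
    for group, m in means.items():
        print(f"{group}: mean happiness = {m:.2f}")
    return means["White"] - means["Black"]

gap = group_gap(happiness_scores)
print(f"Gap (White - Black): {gap:.2f}")
# A persistent positive gap across faces that are smiling equally is the
# pattern Rhue reports: the classifier reads Black faces as less happy.
```

A real audit of this kind would need to hold the inputs constant, comparing photos with similar expressions, lighting, and pose across groups, so that any remaining score gap can be attributed to the system rather than to the images themselves.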