Is facial expression the only cue we take in when reading the state of a human being? According to a study from the University of California, Berkeley, it isn't. When it comes to reading emotions, many other factors come into play, including visual context, background and action. The findings will appear later this week in the Proceedings of the National Academy of Sciences.
The study poses something of a challenge to decades of prior research that focused on facial expression alone as the key to reading emotions.
"Our study reveals that the recognition of emotions is, in the end, a matter of context as much as of faces," says lead author Zhimin Chen, a PhD student in psychology at UC Berkeley. For the study, the researchers blurred out the faces and bodies of actors in a range of movies and home videos, then asked participants to infer the actors' emotions from how they interacted with their surroundings.
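The article does not include the researchers' code, but the masking step it describes can be illustrated with a minimal NumPy sketch. The function name, the simple box-blur kernel, and the hard-coded region coordinates are assumptions for illustration, not the study's actual preprocessing:

```python
import numpy as np

def box_blur_region(frame, y0, y1, x0, x1, k=5):
    """Blur one rectangular region of a grayscale frame with a k x k box filter,
    leaving the rest of the frame (the 'context') untouched."""
    region = frame[y0:y1, x0:x1].astype(float)
    padded = np.pad(region, k // 2, mode="edge")  # replicate edges so output keeps shape
    h, w = region.shape
    out = np.zeros_like(region)
    # Sum the k*k shifted windows, then divide: a plain box blur.
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + h, dx:dx + w]
    blurred = frame.copy()
    blurred[y0:y1, x0:x1] = out / (k * k)
    return blurred

# Example: blur a central 6x6 patch of a 10x10 frame.
frame = np.arange(100, dtype=float).reshape(10, 10)
masked = box_blur_region(frame, 2, 8, 2, 8)
```

A real pipeline would apply this per video frame to the detected face and body boxes; the point is simply that pixels outside the box carry on unchanged, so only contextual information survives.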
The study was able to gather a lot of data in a short period of time. Eventually, this could be used to help interpret emotions from people with disorders such as autism and schizophrenia.
"Some people may have deficiencies in recognizing facial expressions, but they are able to recognize emotions within a context," Chen said. "Right now, companies are developing machine learning algorithms to recognize emotions, but they only train their models on cropped faces and facial expression. According to this new work, those ways of studying moods would give inaccurate results."
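Chen's contrast between face-crop training and context can be sketched in code. This is a hypothetical illustration, not any company's pipeline: `face_crop` mimics what a face-only model is fed, while `context_only` is the study's inverse, masking the face and keeping the scene (the function names and the zero-masking are assumptions):

```python
import numpy as np

def face_crop(frame, box):
    """What a typical face-only pipeline feeds its model: just the face pixels."""
    y0, y1, x0, x1 = box
    return frame[y0:y1, x0:x1]

def context_only(frame, box):
    """The study's inverse: black out the face region, keep the surrounding scene."""
    y0, y1, x0, x1 = box
    out = frame.copy()
    out[y0:y1, x0:x1] = 0
    return out

# Example: an 8x8 grayscale frame with a 3x3 "face" box.
frame = np.arange(64.0).reshape(8, 8)
box = (2, 5, 3, 6)
crop = face_crop(frame, box)        # 3x3 face patch only
scene = context_only(frame, box)    # full frame with the face zeroed out
```

The study's finding suggests a model trained only on the first input discards exactly the information the second input preserves.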