Some brains perform a complicated assessment while others seem to take a shortcut:
Are you a social savant who easily reads people's emotions? Or are you someone who leaves an interaction with an unclear understanding of another person's emotional state?
New UC Berkeley research suggests those differences stem from a fundamental way our brains compute facial and contextual details, potentially explaining why some people are better at reading the room than others — sometimes, much better.
Human brains use information from faces and background context, such as the location or the expressions of bystanders, when making sense of a scene and assessing someone's emotional state. If someone's facial expression is clear but the emotional information in the context is unclear, most people's brains will weigh the clear facial expression heavily and minimize the importance of the background context. Conversely, if a facial expression is ambiguous but the background context provides strong cues about how a person feels, most people will rely more on the context to understand that person's emotions.
Think of it like a close-up photo of a person crying. Without background context, you might assume they're sad. But with context — a wedding altar, perhaps — the meaning shifts significantly.
It adds up to a complex statistical assessment that weighs different cues based on their ambiguity.
But while most people are naturally able to make those judgment calls, Berkeley psychologists say that others seemingly treat every piece of information equally. This discrepancy between complex calculus and simple averages might explain our vast differences in understanding emotions, said Jefferson Ortega, lead author of the study published today (Dec. 16) in Nature Communications.
"We don't know exactly why these differences occur," said Ortega, a psychology Ph.D. student. "But the idea is that some people might use this more simplistic integration strategy because it's less cognitively demanding, or it could also be due to underlying cognitive deficits."
Ortega's team had 944 participants continuously infer the mood of a person in a series of videos. He likened it to a video call: Some of the clips contained hazy backgrounds — like blurring your background in a Zoom meeting. Others had hazy faces and clear context. This allowed his team to isolate the emotional information people get from a person's face and body and the information they get from the context.
Using the participants' scene assessments from those two conditions, Ortega used a model to predict what rating they would provide when they viewed all of the scene details — what he called the "ground truth."
He wanted to know if people really weighed different inputs differently, valuing facial expressions more when backgrounds were blurred, or backgrounds more when the faces were fuzzy. This process, called Bayesian integration, is a statistical way of understanding whether people combine different types of information based on their ambiguity.
He expected everyone would weigh the ambiguities, decide which cue to rely on more, and make an assessment. That was true in about 70% of cases.
However, instead of assessing each cue's ambiguity, the remaining 30% of participants used a more simplistic strategy that essentially averaged the two cues.
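The contrast between the two strategies can be illustrated with a minimal sketch. In reliability-weighted (Bayesian) integration, each cue is weighted inversely by its variance, i.e. its ambiguity, so a clear cue dominates an ambiguous one; simple averaging ignores ambiguity entirely. The mood values and variances below are hypothetical illustration numbers, not data from the study.

```python
def bayesian_integration(means, variances):
    """Combine cues weighted inversely by their variance (ambiguity)."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    return sum(w * m for w, m in zip(weights, means)) / total

def simple_average(means):
    """Treat every cue equally, regardless of its ambiguity."""
    return sum(means) / len(means)

# Hypothetical example: a clear face (low variance) suggests a positive
# mood (+0.8), while a blurry background (high variance) hints at a
# slightly negative one (-0.2).
face_mean, face_var = 0.8, 0.1        # clear facial expression
context_mean, context_var = -0.2, 0.9  # ambiguous background context

bayes = bayesian_integration([face_mean, context_mean], [face_var, context_var])
naive = simple_average([face_mean, context_mean])

print(f"Reliability-weighted estimate: {bayes:.2f}")  # 0.70, leans on the clear face
print(f"Simple average:                {naive:.2f}")  # 0.30, treats both cues alike
```

The weighted estimate sits close to the reliable facial cue, while the simple average is pulled halfway toward the ambiguous context, mirroring the two participant strategies the study describes.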
"It was very surprising," Ortega said, adding that it's less cognitively demanding to take simple averages than to weigh different factors more or less heavily almost instantly. "The computational mechanisms — the algorithm that the brain uses to do that — is not well understood. That's where the motivation came for this paper. It's just an amazing feat."
[...] "Some observers are very good at integrating context and facial expressions to understand emotions," Whitney said of the strong individual differences shown in Ortega's research. "And some folks are not so good at it."
Journal Reference: Ortega, J., Murai, Y. & Whitney, D. Integration of affective cues in context-rich and dynamic scenes varies across individuals. Nat Commun (2025). https://doi.org/10.1038/s41467-025-67466-1
(Score: 5, Insightful) by JoeMerchant on Thursday January 01, @02:21AM
>"...that some people might use this more simplistic integration strategy because it's less cognitively demanding, or it could also be due to underlying cognitive deficits."
Yeah, call it cognitive deficits if you like (you insensitive clod...)
Social intelligence has more dimensions than any of these pseudo-scientists have accurately described, and I'll wager that it varies significantly from one society to the next as well as among members of those societies based on their genetics, neuro-development, health status, arousal, cognitive loading, unconscious interest / disinterest in the subject(s) of study, etc. etc. etc.
There are (likely) more social signals that people process sub-consciously than consciously, and only a sub-set of those have been identified for study and classification.
But, that shouldn't stop hidden camera "studies" from being fed to AI to analyze who perceives what about who in various situations, it should make entertaining reading for the "authors" who put their name on the "studies."
I, for one, had some social-sexual-cue processing speed deficits in my late teens, picking up the signals tragically just a little too late for the desired outcomes (or at least opportunities to further pursue the outcomes) on multiple occasions.
By the time you've answered a questionnaire about the situation, 78 additional potentially important cues have come and gone in the room (assuming there are only a few people in it.)
(Score: 0) by Anonymous Coward on Thursday January 01, @07:45AM (1 child)
Reading the room reminds me of this experiment:
https://www.youtube.com/watch?v=vJG698U2Mvo [youtube.com]
Also maybe they should add top poker players into their tests.
(Score: 2, Interesting) by Anonymous Coward on Friday January 02, @02:12AM
Actually, I find your linked video very insightful about why I hate electronic touch screens in modern cars.
While paying attention to my finger placement and determining if my attempted interaction had accurately registered, I flat missed the other car!
If you see another "hate post" on new car interfaces, run this link up the pole again. I will salute!