Ever asked a question and been met with a blank stare? It’s awkward enough with a person—but on a humanoid robot, it can be downright unsettling. Now, an international team co-led by Hiroshima University and RIKEN has found a fix: giving androids a “thinking face.” Their study reveals that when robots squint and furrow their brow while processing information, they appear more relatable, easing the eerie discomfort we feel with artificial beings that look almost, but not quite, human—known as the “uncanny valley” effect.
Until now, research on the “uncanny valley” phenomenon has focused mainly on robots’ appearance. But their study, published in the International Journal of Social Robotics, suggests that behavior plays just as big a role. When we humans are contemplating an answer, our faces show it. If an android doesn’t respond in a socially expected way, such as failing to look “thoughtful,” it can feel as unnerving as an unnatural face.
Researchers set out to pinpoint the facial cues people use when deep in thought and test whether mimicking these “thinking faces” could make human-android interactions feel more natural—especially as asking questions is a typical part of engaging with robots. Their findings are critical for the robotics and AI industries, particularly those developing customer service androids, AI assistants, and health care companions.
“We often express psychological states such as happiness or anger through facial expressions. However, previous studies have only examined a limited set of expressions. This study aimed to identify the facial patterns associated with the ‘thinking face’ in humans,” said study corresponding author Shushi Namba, associate professor at Hiroshima University’s (HU) Graduate School of Humanities and Social Sciences.
“We then examined whether applying these ‘thinking’ facial patterns to androids could enhance natural human-robot interaction. Specifically, our research addressed the lack of clarity around the facial movements that convey ‘being in thought’ and tested their effect on human perception when applied to an android.”
Identifying ‘thinking’ facial cues in humans
To identify the facial patterns linked to thinking, the international research team filmed the reactions of 40 young adults, equally split between males and females, as they counted silently for two seconds in their heads (Control Condition) and answered a series of questions ranging from basic math to politics (Thinking Condition). Questions were delivered uniformly using voice-reading software to prevent speech variations from influencing responses. Analyzing the 240 facial videos revealed these five distinct expressions associated with thinking:
- Component 1: Raised chin, subtly tightened lips, lifted inner brow, and downward gaze
- Component 2: Opened mouth
- Component 3: Blinking
- Component 4: Slightly narrowed eyes and furrowed brows
- Component 5: Smiling with cheek raised, raised upper lip, and dimpling
Replicating the ‘furrowed face’ on an android
Of the five, the researchers chose to adopt Component 4, which they called “furrowed face,” as it stood out as the most indicative of deep thought based on past studies exploring how people express and perceive thinking.
They programmed an android named Nikola to replicate this expression and tested how it would fare compared with a furrowed smiling face and a neutral face. Eighty-nine crowdsourced workers were asked to watch a video of Nikola displaying these expressions and score each on “genuineness,” “eeriness/human-likeness,” “thinking about the answer,” and “appropriateness.” A hat was placed on the android’s head to conceal the exposed mechanical components used for facial movements and make its appearance in the video recordings more natural.
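The article does not reproduce Nikola’s actual control code, but the idea can be pictured with a minimal sketch: the five components are encoded as FACS-style action-unit sets, and the chosen one is sent to a face controller. The action-unit numbers, intensities, and the `AndroidFace` interface below are illustrative assumptions, not the study’s published implementation.

```python
# Illustrative sketch only: the action-unit mapping and the AndroidFace
# interface are assumptions, not the study's published implementation.

# Rough FACS-style encoding of the five "thinking" components reported above.
# (AU numbers follow the standard Facial Action Coding System; the exact AUs
# and intensities used in the study may differ.)
THINKING_COMPONENTS = {
    "component_1": {17: 0.6, 23: 0.4, 1: 0.5, 64: 0.7},  # chin raise, lip tighten, inner brow raise, gaze down
    "component_2": {25: 0.5},                             # mouth open
    "component_3": {45: 1.0},                             # blink
    "component_4": {4: 0.8, 7: 0.6},                      # brow lowerer + lid tightener ("furrowed face")
    "component_5": {6: 0.5, 12: 0.6, 10: 0.3, 14: 0.4},   # cheek raise, smile, upper-lip raise, dimple
}


class AndroidFace:
    """Hypothetical stand-in for an android's facial actuator interface."""

    def set_action_unit(self, au: int, intensity: float) -> None:
        # A real controller would drive servo or pneumatic actuators here.
        print(f"AU{au:02d} -> intensity {intensity:.1f}")


def show_expression(face: AndroidFace, component: str) -> None:
    """Apply one of the encoded 'thinking' expressions to the android face."""
    for au, intensity in THINKING_COMPONENTS[component].items():
        face.set_action_unit(au, intensity)


if __name__ == "__main__":
    # The study adopted Component 4, the "furrowed face", for Nikola.
    show_expression(AndroidFace(), "component_4")
```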
Testing revealed that the furrowed face was rated the most thoughtful, genuine, human-like, and appropriately acting of the three, as well as the least eerie.
Role of expressions in human-robot interactions
To further demonstrate why expressions matter in robots, the researchers compared people’s reactions to videos featuring either Nikola or a chatbot. Forty Japanese university students were shown eight hypothetical question-and-answer scenarios: four with thinking cues (furrowed face or dots) and four without (neutral face or blank speech bubble).
In the videos, the android and chatbot were asked a question about a sushi restaurant or about technology’s impact on Japanese politics. After a brief 2.5-second pause, during which either a thinking cue or a neutral stimulus was displayed, they responded with “Of course!”
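The trial structure can be pictured with a short sketch. The 2.5-second pause and the cue types come from the article; the `run_trial` function and its print-based "display" are placeholders invented for illustration, standing in for the video stimuli used in the actual experiment.

```python
import time

PAUSE_SECONDS = 2.5  # pause between question and answer, as described above


def run_trial(agent: str, question: str, answer: str, thinking_cue: bool) -> None:
    """Play out one hypothetical question-and-answer trial.

    `agent` is either "android" or "chatbot"; the print calls below are
    placeholders for the video stimuli shown to participants.
    """
    print(f"[{agent}] question: {question}")

    if thinking_cue:
        # Android: furrowed "thinking" face; chatbot: animated dots.
        cue = "furrowed face" if agent == "android" else "..."
    else:
        # Control: neutral face or a blank speech bubble.
        cue = "neutral face" if agent == "android" else "(blank speech bubble)"
    print(f"[{agent}] showing cue: {cue}")

    time.sleep(PAUSE_SECONDS)  # brief pause while the agent appears to "think"
    print(f"[{agent}] answer: {answer}")


if __name__ == "__main__":
    run_trial("android", "Is there a sushi restaurant nearby?", "Of course!", thinking_cue=True)
```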

While both thinking cues made participants feel like the android and chatbot were “thinking,” the dots were rated higher for signaling that information was being processed. Meanwhile, participants said the furrowed face made Nikola seem more human-like, suggesting that when robots follow familiar social norms, they are perceived as more relatable.
“The main takeaway is that when people are ‘thinking,’ they would express the furrowed face. The other message is that implementing human-like ‘thinking faces’ in androids enhances perceptions of ‘being in thought,’ genuineness, human-likeness, and appropriateness while reducing the uncanny valley effect,” Namba said.
One reason the chatbot was seen as more effective at conveying that it was “thinking” is people’s familiarity with dots as a visual cue that information is being processed. As androids become more common, these findings could help guide the development of robots that adopt similar cues, bringing about more natural and intuitive human-robot interactions.
“The ultimate goal is to develop androids that can engage in more natural and intuitive interactions with humans, reducing the sense of eeriness and increasing their acceptance in social settings,” Namba said.
“To get there, we need to further refine the implementation of thinking faces in androids by considering dynamic aspects of facial movements and investigating their effects in real-time social interactions. Additionally, the study suggests exploring cultural variations in the perception of thinking faces and integrating gaze behavior or other social cues to improve human–robot communication.”
More information:
Shushi Namba et al., How an Android Expresses “Now Loading…”: Examining the Properties of Thinking Faces, International Journal of Social Robotics (2024). DOI: 10.1007/s12369-024-01163-9
Citation: That ‘uhh… let me think’ face you make? Androids need it too (2025, April 1), retrieved 1 April 2025 from https://techxplore.com/news/2025-04-uhh-androids.html