Why Your Brain and Mine Agree on What We See

Summary: A new study reveals how uniquely wired human brains can perceive the world in strikingly similar ways. Researchers recorded live neural activity in epilepsy patients and found that while each person’s neurons respond differently to the same image, the relationships between those neural responses remain consistent across individuals.

This shared relational pattern allows different brains to interpret the same scene — like a dog running on the beach — in similar ways. The findings shed light on the universal structure of perception and may help refine artificial intelligence models inspired by human cognition.

Key Facts:

  • Unique Yet Alike: Each person’s neurons fire differently, but the relationships between their activity patterns are consistent across people.
  • Shared Perception: This common relational code explains how humans interpret the world similarly despite individual neural wiring.
  • AI Implications: Understanding how the brain organizes perception could improve artificial neural networks and machine learning.

Source: Reichman University

How is it that we all see the world in a similar way?

Imagine sitting with a friend in a café, both of you looking at a phone screen displaying a dog running along the beach.

Although each of our brains is a world unto itself, made up of billions of neurons with completely different connections and unique activity patterns, you would both describe it as: “A dog on the beach.” How can two such different brains lead to the same perception of the world?

A joint research team from Reichman University and the Weizmann Institute of Science investigated how people with differently wired brains can still perceive the world in strikingly similar ways.

Every image we see and every sound we hear is encoded in the brain through the activation of tiny processing units called neurons – nerve cells roughly ten times thinner than a human hair.

The human brain contains 85 billion interconnected neurons that enable us to experience the world, think, and respond to it.

The question that has intrigued brain researchers for years is how this encoding is performed – and how two people with completely different neural codes can nevertheless end up with similar perceptions.

The research team, led by Reichman University graduate student Ofer Lipman and supervised by Prof. Rafi Malach and Dr. Shany Grossman from the Weizmann Institute and Prof. Doron Friedman and Prof. Yacov Hel-Or from Reichman University, set out to observe how brain neurons encode information in real time.

This is a most challenging task, as most brain-imaging methods provide only a low-resolution picture, similar to a satellite photo of a city where you can see the highways but not the people on the streets.

To overcome this challenge, the researchers drew on a unique source of data: epilepsy patients with electrodes implanted in their brains for medical purposes. While the implants were placed to help doctors locate the epicenter of the patients’ seizures, they also offered the researchers a rare window into the activity of brain neurons – recorded live, not simulated or inferred – while the patients viewed images.

The research team discovered that, just as in artificial neural networks (the technology behind AI), the raw patterns of activity in the human brain differ from person to person.

When observing a cat, the neurons that “light up” (i.e., become active) in one person’s brain may be different from those in another person’s brain.

But here is the surprising finding: when the researchers shifted from examining the raw activity of individual neurons to the relationships between overall activity patterns – i.e., how strongly the brain responds to a cat versus a dog – they discovered a common relational structure across all participants.

For example, if one brain’s general activity in response to a cat is more similar to its response to a dog than to, say, an elephant, that same relationship is likely to hold in all other brains. In other words, the actual activity patterns in different brains may not be identical, but the relationship between them is preserved.
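To make this concrete, here is a minimal, illustrative sketch in Python of the kind of relational (“representational similarity”) comparison described above. Everything in it is invented for illustration – the simulated “brains,” the simulate_brain helper, and the numbers – and it is not the study’s actual pipeline or data; it only demonstrates how raw activity patterns can disagree across individuals while the pairwise relationships between responses agree.

```python
# Minimal sketch of a relational ("representational similarity") comparison.
# All data here are simulated for illustration; this is NOT the study's code.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_neurons = 200  # hypothetical number of recorded neurons per "brain"

# Shared latent structure: cat and dog are more alike than either is to elephant.
latent = np.array([
    [1.0, 0.0],  # cat
    [0.8, 0.2],  # dog
    [0.0, 1.0],  # elephant
])

def simulate_brain(latent, n_neurons, rng):
    """Project the shared structure through subject-specific random 'wiring',
    plus noise, so that raw activation patterns differ between subjects."""
    wiring = rng.normal(size=(latent.shape[1], n_neurons))
    return latent @ wiring + 0.1 * rng.normal(size=(latent.shape[0], n_neurons))

brain_a = simulate_brain(latent, n_neurons, rng)
brain_b = simulate_brain(latent, n_neurons, rng)

# 1) Raw activation patterns: compared neuron-by-neuron, the two brains disagree.
raw_agreement = spearmanr(brain_a.ravel(), brain_b.ravel()).correlation

# 2) Relational code: pairwise distances between stimulus responses within each
#    brain (cat-dog, cat-elephant, dog-elephant), compared across brains.
rdm_a = pdist(brain_a, metric="correlation")
rdm_b = pdist(brain_b, metric="correlation")
relational_agreement = spearmanr(rdm_a, rdm_b).correlation

print(f"raw pattern agreement:      {raw_agreement:+.2f}")        # near 0
print(f"relational (RDM) agreement: {relational_agreement:+.2f}")  # near 1
```

Because each simulated brain projects the same underlying stimulus structure through its own random “wiring,” the flattened activity patterns are essentially uncorrelated between brains, while the distance relationships among stimuli line up almost perfectly – the signature of a shared relational code.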

This relational representation may be the brain’s way of organizing information so that all humans can understand the world in a similar way, even when the underlying neural coding differs.

“This study brings us one step closer to deciphering the brain’s ‘representational code’ – the language in which our brains store and organize information,” explains Lipman.

“This understanding helps advance not only neuroscience, but also AI: insights into how the brain represents information can inspire the design of more efficient and intelligent artificial networks, and vice versa – artificial networks can generate insights that deepen our knowledge of the brain.

“This study forms a part of a broad series of works in which researchers compare the representation of information in natural networks (the human brain) with the representation of information in artificial networks (AI). This integration opens the door to a richer understanding of ourselves and the systems we build.”

So, the next time you see a dog running on the beach and you think “a dog,” remember that behind this simple thought lies a vast and complex code that science is only beginning to crack.

Key Questions Answered:

Q: How can two different brains see the same thing in the same way?

A: Although neurons differ across individuals, the relationships between neural responses to objects—like how the brain reacts to a cat versus a dog—follow a universal pattern.

Q: How did researchers study this phenomenon of shared perception?

A: They recorded real-time neuron activity from epilepsy patients with brain implants, providing direct insight into how information is represented in the human brain.

Q: Why does this matter beyond neuroscience?

A: The discovery bridges human cognition and artificial intelligence, showing how understanding the brain’s representational code could guide the design of more efficient AI systems.

About this perception and neuroscience research news

Author: Lital Ben Ari
Source: Reichman University
Contact: Lital Ben Ari – Reichman University
Image: The image is credited to Neuroscience News

Original Research: Open access.
“Invariant inter-subject relational structures in high order human visual cortex” by Ofer Lipman et al. Nature Communications


Abstract

Invariant inter-subject relational structures in high order human visual cortex

It is a fundamental aspect of behavior that different individuals see the world in a largely similar manner. This is an essential basis for humans’ ability to cooperate and communicate.

However, what are the neural properties that underlie these inter-subject commonalities of our visual world?

Finding out what aspects of neural coding remain invariant across individuals’ brains will shed light not only on this fundamental question but will also point to the neural coding scheme at the basis of visual perception.

Here, we address this question by obtaining intracranial recordings from three groups of patients taking part in a visual recognition task (overall 19 patients and 244 high-order visual contacts included in the analyses) and examining the neural coding scheme that was most consistent across individuals’ visual cortex.

Our results highlight relational coding – expressed by the set of similarity distances between profiles of pattern activations – as the most consistent representation across individuals.

Alternative coding schemes, such as activation pattern coding or linear coding, failed to achieve similar inter-subject consistency.

Our results thus support relational coding as the central neural code underlying individuals’ shared perceptual content in the human brain.