Summary: For decades, neuroscience textbooks have taught that the first stage of visual processing relies on two types of cells specialized in detecting “edges”—sharp transitions between light and dark. However, an international team has shattered this old model. By using AI to create “digital twins” of mouse neurons, researchers discovered a third, previously unknown type of neuron with a two-part receptive field.
One part identifies textures (like fur or feathers), while the other recognizes specific arrangements (like a nose and mouth). This discovery explains how the brain separates complex objects from their backgrounds far more efficiently than simple edge detection alone would allow.
Key Facts
- The “Digital Twin” Advantage: Researchers used deep neural networks to simulate individual mouse neurons, allowing them to predict which specific images would “fire” the cells before testing them in real brains.
- Bipartite Receptive Fields: Unlike traditional cells that only respond to brightness and orientation, these new neurons have two distinct parts that specialize in different spatial frequencies.
- Texture vs. Arrangement: One part of the cell reacts to high-frequency details (dense patterns/textures), while the other reacts to low-frequency areas (broader shapes and arrangements).
- Object Separation: These neurons are specifically tuned to the signals needed to distinguish an object (like a bird) from its background (like a tree), a task that simple “edge” cells struggle with.
- AI-Validated Science: The Göttingen team’s AI predictions were confirmed through targeted experiments in real mouse brains at Stanford, proving the AI wasn’t just “hallucinating” the cells.
Source: University of Göttingen
The visual cortex is the part of the brain that enables visual perception. In this area, millions of nerve cells, called neurons, process stimuli from the outside world. They react only when objects with certain characteristics come into our field of vision.
According to textbooks, the first stage of the visual cortex has two main types of neurons that specialize in edges – sharp transitions between light and dark.
An international team of researchers from Stanford University and the University of Göttingen has now used machine learning techniques to find neurons in mice that carry out this processing in a previously unknown way. These neurons respond to different “spatial frequencies”, meaning how rapidly the patterns of different objects change across the visual field.
The research was published in Nature Neuroscience.
For their discovery, the researchers used deep neural networks, also used in AI models, to create digital twins of mouse neurons. These models can predict the activity of individual neurons and thus systematically investigate which images activate cells best. Researchers from Göttingen University played a key role in the development of these digital twins.
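The screening loop described above can be sketched in miniature. Everything below is a hypothetical stand-in, not the study's actual architecture: a toy “digital twin” is simply any fitted model that predicts a neuron's response to an image, and once fitted it can be queried with thousands of candidate stimuli in silico before anything is shown to a real brain.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a fitted digital twin: a simple
# linear-nonlinear model. In the actual study a deep network is
# trained on recorded neural responses; here the weights are random.
weights = rng.normal(size=(16, 16))

def twin(img):
    # Predicted firing: rectified weighted sum of pixel intensities.
    return max(0.0, float(np.sum(weights * img)))

# Screen many candidate images in silico and keep the one the model
# predicts will activate the neuron most strongly; only such winners
# would then be shown to the real neuron for verification.
candidates = rng.normal(size=(500, 16, 16))
responses = np.array([twin(img) for img in candidates])
best_image = candidates[int(np.argmax(responses))]
```

In the study, it is this final verification step, showing the model-selected images to real mouse neurons, that ruled out the possibility that the predictions were artifacts of the model.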
“Neural networks are essential tools for discovering new properties from large data sets – such as these novel neuronal properties,” explains Professor Fabian Sinz at Göttingen University’s Institute of Computer Science.
“The predicted best images are not fantasies of our AI model,” emphasizes Professor Alexander Ecker at the same institute.
“Targeted experiments in real mouse brains, led by researchers at Stanford University, have confirmed the properties predicted by our model are real.”
Each neuron in the visual cortex is responsible for a specific area of the visual field. The neuron only reacts when an appropriate stimulus appears in the relevant part of the visual field – such as an edge in the upper left corner of the field of vision.
The relevant area is known as the neuron’s “receptive field”. Classic textbook models distinguish between two types of neurons in the visual system: “simple cells” which are stimulated when an edge – meaning a sharp transition between light and dark – appears at a specific position in their receptive field; and “complex cells” which also respond to edges, but regardless of their exact position, as long as the edge has a preferred orientation. Both cell types are therefore specialized in detecting differences in brightness.
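The textbook distinction can be captured with the classic Gabor-filter description of these cells. The sketch below is an illustrative toy model, not the paper's code: a simple cell is modeled as a rectified linear filter at a fixed position and phase, while a complex cell is modeled with the standard “energy model”, summing the squared outputs of two filters that differ by 90 degrees in phase, which makes it insensitive to the exact position of the edge.

```python
import numpy as np

def gabor(size=16, freq=0.25, theta=0.0, phase=0.0):
    # Classic Gabor filter: an oriented sinusoid under a Gaussian envelope.
    y, x = np.mgrid[-size // 2:size // 2, -size // 2:size // 2]
    xr = x * np.cos(theta) + y * np.sin(theta)
    env = np.exp(-(x**2 + y**2) / (2 * (size / 6) ** 2))
    return env * np.cos(2 * np.pi * freq * xr + phase)

def simple_cell(img, g):
    # Simple cell: half-rectified linear filter, sensitive to the
    # exact position (phase) of the edge in its receptive field.
    return max(0.0, float(np.sum(img * g)))

def complex_cell(img, g0, g90):
    # Complex cell (energy model): squared responses of a quadrature
    # pair, largely invariant to where exactly the edge falls.
    return float(np.sum(img * g0) ** 2 + np.sum(img * g90) ** 2)
```

Fed a grating that is shifted in phase, the simple cell's response changes drastically while the complex cell's stays nearly constant, which is exactly the position tolerance the textbook model describes.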
The newly discovered neurons have a two-part receptive field: one part responds to textures, such as the detailed patterns found in the background of a photo or a bird’s plumage; the other part is only stimulated when patterns are precisely arranged, such as the mouth and nose on a face.
The key factor is that both parts specialize in different “spatial frequencies”, meaning how often patterns such as bars or pixels repeat per unit of distance. A high frequency describes a dense pattern with fine details and sharp lines, while a low frequency describes a coarse pattern with larger, uniform areas.
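The frequency split can be illustrated with a standard image-processing trick: blurring an image keeps only its low spatial frequencies, and subtracting the blurred version from the original leaves the high frequencies. This is a generic sketch using SciPy's Gaussian filter on a random stand-in patch, not the study's analysis:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)
patch = rng.normal(size=(64, 64))  # stand-in for a grayscale image patch

# Low frequencies: the coarse, uniform structure that survives a heavy blur.
low = gaussian_filter(patch, sigma=3.0)

# High frequencies: the fine, dense detail that the blur removed.
high = patch - low

# The original patch is exactly the sum of the two frequency bands.
reconstructed = low + high
```

In these terms, a subfield tuned to the high band would respond to dense textures such as plumage, while a subfield tuned to the low band would respond to broader shapes and arrangements.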
“Classic simple and complex cells are tuned to simple edges defined by differences in brightness,” summarizes Professor Andreas Tolias of Stanford University.
“In contrast, the two-part neurons we found respond to more complex information about edges – that is, differences in texture or spatial frequency. These are precisely the kinds of signals needed to separate an object from its background.”
Key Questions Answered:
Q: Didn’t we already know how the first stage of vision works?
A: We knew the basics! For 50 years, we thought the visual cortex was just an “edge detector.” But this study shows it’s much more like a high-tech photo editor. It doesn’t just see lines; it simultaneously sees the difference between the “texture” of a sweater and the “shape” of the person wearing it.
Q: What exactly is a “digital twin” of a neuron?
A: It’s a virtual model of a single brain cell. Scientists used AI to build a computer version of a mouse neuron that behaves exactly like the real thing. They can show the “twin” millions of images in seconds to see what makes it react, then go back to the real mouse to confirm the findings. It’s like a shortcut to understanding the brain’s software.
Q: Could this change how we build artificial intelligence?
A: Absolutely. Most computer vision today is still based on the old “edge detection” model. By mimicking these newly discovered two-part neurons, we could build AI that is much better at identifying objects in messy, cluttered environments—just like a mouse (or a human) does.
About this visual neuroscience and AI research news
Author: Melissa Sollich
Source: University of Göttingen
Contact: Melissa Sollich – University of Göttingen
Image: The image is credited to Neuroscience News
Original Research: Open access.
“Functional bipartite invariance in mouse primary visual cortex receptive fields” by Zhiwei Ding, Dat Tran, Kayla Ponder, Zhuokun Ding, Rachel Froebe, Lydia Ntanavara, Paul G. Fahey, Erick Cobos, Luca Baroni, Maria Diamantaki, Eric Y. Wang, Andersen Chang, Stelios Papadopoulos, Jiakun Fu, Taliah Muhammad, Christos Papadopoulos, Santiago A. Cadena, Alexandros Evangelou, Konstantin Willeke, Fabio Anselmi, Sophia Sanborn, Jan Antolik, Emmanouil Froudarakis, Saumil Patel, Edgar Y. Walker, Jacob Reimer, Fabian H. Sinz, Alexander S. Ecker, Katrin Franke, Xaq Pitkow & Andreas S. Tolias. Nature Neuroscience
DOI: 10.1038/s41593-026-02213-3
Abstract
Functional bipartite invariance in mouse primary visual cortex receptive fields
Sensory systems support generalization by representing features that persist under input variation; however, identifying the neuronal basis of these invariances remains difficult due to high-dimensional and nonlinear neural computations.
Here we leverage the inception loop paradigm, iterating between large-scale recordings, predictive models and in silico experiments with in vivo verification, to characterize neuronal invariances in mouse primary visual cortex (V1). We synthesize varied exciting inputs (VEIs), dissimilar images that drive target neurons.
These VEIs revealed a new bipartite invariance: one subfield encodes a shift-tolerant high-frequency texture and the other encodes a fixed low-frequency pattern. This division aligns with object boundaries defined by spatial frequency differences in highly activating images, suggesting a contribution to segmentation.
Analysis of the MICrONS dataset revealed a hierarchy of excitatory neurons in mouse V1 layers 2/3: postsynaptic neurons exhibited greater invariance than their presynaptic inputs, while neurons with lower invariance formed more connections.
Together, these results provide insights and scalable methodology for mapping neuronal invariances.

