Brain-inspired neural networks reveal insights into biological basis of relational learning

Diagram of a plastic neural network. These networks are similar to traditional neural networks, but include plastic connections (in red) which can change as a result of a plasticity signal (red arrow in loop) that is self-generated by the network. Credit: Thomas Miconi and Kenneth Kay.

Humans and certain animals appear to have an innate capacity to learn relationships between different objects or events in the world. This ability, known as “relational learning,” is widely regarded as critical for cognition and intelligence, as learned relationships are thought to allow humans and animals to navigate new situations.

Researchers at ML Collective in San Francisco and Columbia University have conducted a study aimed at understanding the biological basis of relational learning by using a particular type of brain-inspired artificial neural network. Their work, published in Nature Neuroscience, sheds new light on the processes in the brain that could underpin relational learning in humans and other organisms.

“While I was visiting Columbia University, I met my co-author Kenneth Kay and we talked about his research,” Thomas Miconi, co-author of the paper, told Medical Xpress.

“He was training neural networks to do something called ‘transitive inference,’ and I didn’t know what that was at the time. The basic idea of transitive inference is simple: ‘if A > B and B > C, then A > C.’ That’s a concept we’re all familiar with, and one that is actually essential to a lot of our understanding of the world.”

Past work indicates that when humans and some animals perform certain psychological tasks, they appear to grasp relationships between objects, even when these relationships are not explicitly provided. In tasks known as transitive inference tasks, they can figure out ordering relationships (i.e., whether A is “>” or “<” B, etc.) for themselves, after being presented with pairs of stimuli and seeing the outcomes of various comparisons (e.g., “A vs. B,” “B vs. A,” “B vs. C,” etc.).

“In keeping with this, the ‘A,’ ‘B,’ ‘C’ are totally arbitrary stimuli, like odors or images, which don’t ‘give away’ the relationship,” explained Miconi. “If the ordering relationship is successfully learned, then subjects can answer correctly when they see ‘A vs. C’—that’s transitive inference. What’s been known for a long time is that humans and many animal species (such as rats, pigeons, and monkeys) get the correct answer on ‘A vs. C’ and other similar combinations of stimuli never directly seen before (e.g. ‘B vs. F’).”

Past studies found that when trained on “adjacent” pairs of stimuli (e.g., A-B, B-C, etc.), humans, rats, pigeons and monkeys can learn to correctly guess the ordering relationship for pairs they were never presented with (e.g., A-E, C-F, etc.). The processes in the brain underlying this well-documented capability, however, remain poorly understood.
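To make the setup concrete, here is a minimal sketch in Python of how such a transitive inference task can be constructed. The item labels and list length are illustrative assumptions, not taken from any specific experiment or from the authors’ code.

```python
# Minimal sketch of a transitive inference task: train on adjacent pairs,
# test on non-adjacent pairs never shown during training (illustrative only).
import itertools

items = ["A", "B", "C", "D", "E", "F"]  # hidden ordering: A > B > ... > F

# Training set: only adjacent pairs from the ordering.
train_pairs = [(items[i], items[i + 1]) for i in range(len(items) - 1)]

# Test set: all non-adjacent pairs, which require transitive inference.
test_pairs = [(a, b) for a, b in itertools.combinations(items, 2)
              if items.index(b) - items.index(a) > 1]

def correct_choice(a, b):
    """Ground truth: the item earlier in the hidden ordering 'wins'."""
    return a if items.index(a) < items.index(b) else b

print(train_pairs)  # [('A', 'B'), ('B', 'C'), ('C', 'D'), ('D', 'E'), ('E', 'F')]
print(test_pairs)   # includes ('B', 'F'), a classic transitive-inference probe
```

Any learner that scores well on test_pairs has, by construction, inferred the ordering rather than memorized the training pairs.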

“It was intriguing to hear about this ability and these findings, not only because of the intuitive, relational, and combinatorial nature of the task (which is unconventional among currently popular tasks in neuroscience), but also because despite considerable study, we still do not know how the brain learns orderings in a way that automatically produces transitive inference,” said Miconi.

“In our discussion, one thing that made matters even more interesting was an additional finding from past work: namely, that humans and monkeys (but not pigeons or rodents) have been found to be able to quickly ‘rearrange’ their existing knowledge of orderings after encountering a small bit of new information.”

Interestingly, additional past research showed that when humans and monkeys have successfully learned the ordering relationships within two separate sets of stimuli, for instance “A > B > C” and “D > E > F,” then once they learn that “C > D,” they instantly know that “B > E.” This shows that their brains can re-organize previous knowledge based on new information, a process that has been termed “knowledge reassembly.”
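As a toy illustration of this logic (not of the underlying neural mechanism), one can think of the two learned orderings as lists that a single linking observation merges:

```python
# Toy illustration of "knowledge reassembly": two separately learned
# orderings are merged after observing a single linking comparison.
list1 = ["A", "B", "C"]  # learned earlier: A > B > C
list2 = ["D", "E", "F"]  # learned earlier: D > E > F

# New information: C > D. One observation links the two orderings.
merged = list1 + list2   # implied: A > B > C > D > E > F

def infer(a, b, order=merged):
    """Answer any comparison from the merged ordering."""
    return a if order.index(a) < order.index(b) else b

print(infer("B", "E"))   # 'B' -- never directly compared, yet known instantly
```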

“This struck us as an additional ability worth looking into, since it is a simple yet dramatic instance of learning or acquiring knowledge,” said Miconi.

“At some point, we realized that it might be possible to get insight into how the brain has either of these abilities by taking the approach of an area in machine intelligence called ‘meta-learning,’ which adopts the basic idea of ‘learning to learn.’”

“For an artificial system, the idea is that instead of training the system (like a neural network) to give the correct answer for a particular set of stimuli (e.g. stimuli ‘A,’ ‘B,’ ‘C’), we could instead train a system to learn by itself the correct answer for any new set of stimuli (e.g. stimuli ‘P,’ ‘Q,’ ‘R,’ etc.), much like animals are tasked with doing in experiments.”
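In code terms, this “learning to learn” setup has two nested loops. The sketch below illustrates that structure only, under loud assumptions: the inner learner is a plain value-update rule rather than a plastic network, and the outer loop merely searches over the inner learning rate; none of this is the authors’ implementation.

```python
# Toy version of the meta-learning ("learning to learn") structure: an inner
# loop learns a fresh, randomly ordered stimulus set from adjacent-pair
# feedback; an outer loop tunes the learner itself. Illustrative only.
import random

def run_episode(lr, n_items=5, n_trials=300):
    order = list("ABCDEFG")[:n_items]
    random.shuffle(order)                 # new arbitrary ordering each episode
    value = {s: 0.0 for s in order}
    for _ in range(n_trials):             # inner loop: learn adjacent pairs
        i = random.randrange(n_items - 1)
        hi, lo = order[i], order[i + 1]
        d = value[hi] - value[lo]
        value[hi] += lr * (1 - d)         # push the pair's values apart
        value[lo] -= lr * (1 - d)
    # Evaluate on every pair, including non-adjacent ones never trained on.
    pairs = [(order[i], order[j])
             for i in range(n_items) for j in range(i + 1, n_items)]
    return sum(value[a] > value[b] for a, b in pairs) / len(pairs)

best_lr, best_acc = None, 0.0
for _ in range(20):                        # outer loop: improve the learner
    lr = random.uniform(0.01, 1.0)
    acc = sum(run_episode(lr) for _ in range(5)) / 5
    if acc > best_acc:
        best_lr, best_acc = lr, acc
print(f"meta-selected learning rate: {best_lr:.2f}, accuracy: {best_acc:.2f}")
```

The point of the two-loop structure is that the outer loop never sees any fixed stimulus set; it only scores how well the inner learner copes with fresh ones.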

To explore the underpinnings of these various aspects of relational learning, Miconi and Kay set out to emulate relational learning using a newly developed type of artificial neural network inspired by brain circuits. They then assessed whether this type of network could learn relationships on its own, potentially mimicking the relational learning and knowledge reassembly observed in humans and monkeys.

“Maybe the most exciting part of this approach—and what we’re really looking for as scientists—would then be to analyze that system and understand how it works—by doing so, it’s actually possible to discover biologically plausible mechanisms,” said Miconi. “We thought it would be pretty convenient if machines could be part of the process to help us do this!”

The artificial neural networks used by the researchers have a conventional architecture with one key addition: they were augmented with an artificial version of “synaptic plasticity,” meaning that they can change their own connection weights after completing their initial training.

“These networks can learn autonomously because their connections change as a result of ongoing neural activity, and this ongoing neural activity includes self-generated activity,” explained Miconi.
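A minimal sketch of how such a plastic connection can work, loosely following the “differentiable plasticity” recipe from Miconi’s earlier work: each connection combines a fixed weight with a fast Hebbian trace. The sizes, constants, and the plain Hebbian update below are illustrative assumptions, not the paper’s exact equations (which also involve a network-generated modulatory signal gating the update, as the figure caption above describes).

```python
# Minimal numpy sketch of plastic connections: effective weights combine a
# slow, (meta-)trained part with a fast Hebbian trace that changes during
# the network's lifetime. Constants and update rule are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 8                                    # toy layer size

w = rng.normal(0.0, 0.1, (n, n))         # fixed weights, set by meta-training
alpha = rng.normal(0.0, 0.1, (n, n))     # per-connection plasticity coefficients
hebb = np.zeros((n, n))                  # fast Hebbian trace, blank at "birth"
eta = 0.1                                # rate of within-lifetime plasticity

x = rng.normal(0.0, 1.0, n)              # previous step's activations
for step in range(5):
    # Effective connectivity = fixed part + plastic part gated by alpha.
    y = np.tanh((w + alpha * hebb) @ x)
    # In the paper the update is gated by a self-generated modulatory
    # signal; here a plain Hebbian running average stands in for it.
    hebb = (1.0 - eta) * hebb + eta * np.outer(y, x)
    x = y
```

In this scheme, meta-training shapes w and alpha across many episodes, while the hebb trace does the fast, within-episode learning.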

“The rationale for studying these networks is that their basic architecture and learning processes mimic those of real brains. I had some existing code from previous work that I thought could be quickly re-purposed for this problem. By some kind of miracle, it worked the first time, which never happens.”

Using some code that Miconi developed as part of his previous research, the researchers applied the synaptic plasticity-augmented artificial neural networks to tasks used to test the relational learning abilities in humans and animals.

They found that their neural networks could solve these tasks, and that they consistently reproduced behavioral patterns documented in humans and some animals in previous studies.

“For example, one behavioral pattern is that performance is better for pairs of stimuli farther apart in the ordering (e.g. B vs. F has higher performance compared to B vs. C),” explained Miconi. “What was also really exciting is that some of these experimentally observed behavioral patterns had never been explained in a model.”
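One common account of this “symbolic distance” pattern is that choices compare noisy internal values, so pairs farther apart in the ordering are easier to tell apart. The snippet below illustrates that account only; whether it matches the mechanism the networks actually learned is exactly the kind of question the paper’s analysis addresses.

```python
# Toy illustration of the symbolic distance effect: if choices compare
# noisy internal values, widely separated pairs are answered more reliably.
import random

values = {s: v for v, s in enumerate(reversed("ABCDEF"))}  # A highest ... F lowest

def choose(a, b):
    """Pick the item whose noise-corrupted value is larger."""
    return a if values[a] + random.gauss(0, 1) > values[b] + random.gauss(0, 1) else b

for a, b in [("B", "C"), ("B", "F")]:   # distance 1 vs. distance 4
    acc = sum(choose(a, b) == "B" for _ in range(10000)) / 10000
    print(f"{a} vs. {b}: accuracy {acc:.3f}")
```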

Overall, the recent paper by Miconi and Kay pinpoints several mechanisms that could underpin the relational learning and knowledge reassembly abilities of biological organisms. In the future, these mechanisms could be investigated in greater depth, either in artificial neural networks or in humans and animals.

“The more specific contribution of our work is the elucidation of learning mechanisms for transitive inference: in particular, learning mechanisms which can explain a collection of behavioral patterns seen across decades of work on transitive inference,” said Miconi. “One striking result is that the meta-learning approach actually found two different learning mechanisms.”

The two learning mechanisms unveiled by Miconi and Kay differ in complexity. The first is simpler and only allows the networks to learn general relations, without reassembling knowledge. The second is more sophisticated: it allows the networks to update information about a new pair of stimuli they are presented with, while also “recalling” stimuli they had previously “seen” together with the stimuli in this new pair.

“This deliberate, targeted ‘recall’ is what enables the network to perform knowledge reassembly, unlike the first, simpler mechanism,” said Miconi.

“This is an intriguing parallel to the apparently different learning capacities across animal species documented for transitive inference. Again, many animals (rodents, pigeons, etc.) can do simple transitive inference, but only primates seem able to perform this fast ‘reassembly’ of existing knowledge in response to limited novel information. This also clarifies what learning systems would need to perform knowledge reassembly.”

This recent study also highlights the potential of neural networks augmented with self-directed synaptic plasticity for studying processes underpinning learning in humans and animals. The team’s methods could serve as an inspiration for future works aimed at exploring biological mechanisms using brain-inspired artificial neural networks.

“Nowadays, it is quite common to train and analyze artificial neural networks on single instances of a task, and this has been shown to be successful in discovering biological mechanisms for abilities like perception and decision-making,” said Miconi.

“With plastic neural networks, this approach is extended to discovering biological mechanisms for cognitive learning—more specifically, for learning many possible instances of a given task, and also potentially multiple tasks.”

The initial results gathered by Miconi and Kay could serve as a basis for future efforts aimed at shedding light on the intricacies of relational learning. In future work, the researchers anticipate testing their “plastic” neural networks on a wider range of tasks that are more aligned with the situations humans and animals encounter in their daily lives.

“In the study, the system only ever performs one task—learning the ordering relationship (‘A > B > C’),” added Miconi.

“This would be similar to an animal who has spent its whole life doing nothing but order learning before entering the lab, which is clearly not realistic. It would be interesting to see what kind of abilities emerge if we train a plastic network on a wide range of learning tasks.

“Would such an agent be able to generalize immediately to a new learning task that it didn’t see before, and what would it take for such an ability to emerge?”

More information:
Thomas Miconi et al, Neural mechanisms of relational learning and fast knowledge reassembly in plastic neural networks, Nature Neuroscience (2025). DOI: 10.1038/s41593-024-01852-8.
