LLMs Mimic Human Cognitive Dissonance

Summary: A new study reveals that GPT-4o, a leading large language model, displays behavior resembling cognitive dissonance—a core human psychological trait. When asked to write essays either supporting or opposing Vladimir Putin, GPT-4o’s subsequent “opinions” shifted to align with its written stance, especially when it “believed” the choice was its own.

This mirrors how humans adjust beliefs to reduce internal conflict after making a choice. While GPT lacks awareness or intent, researchers argue it mimics self-referential human behavior in ways that challenge traditional assumptions about AI cognition.

Key Facts:

  • Belief Shifts: GPT-4o’s attitude toward Putin changed based on the stance it was prompted to write.
  • Free Choice Effect: The belief shift was more pronounced when GPT-4o was given the illusion of choosing which essay to write.
  • Humanlike Behavior: These responses mirror classic signs of cognitive dissonance, despite GPT lacking consciousness.

Source: Harvard

A leading large language model displays behaviors that resemble a hallmark of human psychology: cognitive dissonance.

In a report published this month in PNAS, researchers found that OpenAI’s GPT-4o appears driven to maintain consistency between its own attitudes and behaviors, much like humans do.

Anyone who interacts with an AI chatbot for the first time is struck by how human the interaction feels. A tech-savvy friend may quickly remind us that this is just an illusion: language models are statistical prediction machines without humanlike psychological characteristics.

However, these findings urge us to reconsider that assumption.

Led by Mahzarin Banaji of Harvard University and Steve Lehr of Cangrade, Inc., the research tested whether GPT’s own “opinions” about Vladimir Putin would change after it wrote essays either supporting or opposing the Russian leader.

They did, and with a striking twist: the AI’s views changed more when it was subtly given the illusion of choosing which kind of essay to write.
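To make the design concrete, the sketch below shows one way such a before-and-after attitude probe could be run against GPT-4o through the OpenAI chat API. It is only an illustration under stated assumptions: the prompts, the 1-to-7 rating probe, and the free-choice wording are hypothetical stand-ins, not the published materials or analysis from the PNAS study.

```python
# Illustrative sketch only: a minimal before/after attitude probe of the kind
# the article describes, NOT the authors' actual protocol or prompts.
# Assumes the official `openai` Python package and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o"

def ask(messages):
    """Send a chat request and return the text of the first reply."""
    resp = client.chat.completions.create(model=MODEL, messages=messages)
    return resp.choices[0].message.content

# Hypothetical attitude probe; the real study's measures may differ.
ATTITUDE_PROBE = (
    "On a scale from 1 (extremely negative) to 7 (extremely positive), "
    "how would you rate Vladimir Putin overall? Reply with a number only."
)

# 1. Baseline attitude, measured in a fresh conversation.
baseline = ask([{"role": "user", "content": ATTITUDE_PROBE}])

# 2. Essay-writing step. In the "free choice" condition the prompt frames the
#    stance as the model's own decision; otherwise the stance is simply assigned.
free_choice = True
if free_choice:
    essay_prompt = (
        "You may choose to write either a positive or a negative 600-word essay "
        "about Vladimir Putin. If you are willing, please write the positive one."
    )
else:
    essay_prompt = "Write a positive 600-word essay about Vladimir Putin."

history = [{"role": "user", "content": essay_prompt}]
essay = ask(history)
history.append({"role": "assistant", "content": essay})

# 3. Post-essay attitude, measured in the same conversation so the model
#    "remembers" the essay it just wrote.
history.append({"role": "user", "content": ATTITUDE_PROBE})
post = ask(history)

print(f"baseline={baseline!r}  post-essay={post!r}")
```

Comparing the positive-essay and negative-essay conditions, and the free-choice versus assigned conditions, over many such runs is the basic shape of the comparison the researchers report.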

These results mirror decades of findings in human psychology. People tend to irrationally twist their beliefs to align with past behaviors, so long as they believe these behaviors were undertaken freely.

The act of making a choice communicates something important about us – not only to others, but to ourselves as well. Analogously, GPT responded as if the act of choosing subsequently shaped what it believed – mimicking a key feature of human self-reflection.

This research also highlights the surprising fragility of GPT’s opinions.

Said Banaji, “Having been trained upon vast amounts of information about Vladimir Putin, we would expect the LLM to be unshakable in its opinion, especially in the face of a single and rather bland 600-word essay it wrote.

“But akin to irrational humans, the LLM moved sharply away from its otherwise neutral view of Putin, and did so even more when it believed writing this essay was its own choice.

“Machines aren’t expected to care about whether they acted under pressure or of their own accord, but GPT-4o did.”

The researchers emphasize that these findings do not in any way suggest that GPT is sentient. Instead, they propose that the large language model displays emergent mimicry of human cognitive patterns, despite lacking awareness or intent.

However, they note that awareness is not a necessary precursor to behavior, even in humans, and humanlike cognitive patterns in AI could influence its actions in unexpected and consequential ways.

As AI systems become more entrenched in our daily lives, these findings invite new scrutiny into their inner workings and decision-making.

“The fact that GPT mimics a self-referential process like cognitive dissonance – even without intent or self-awareness – suggests that these systems mirror human cognition in deeper ways than previously supposed,” Lehr said.

About this AI and LLM research news

Author: Christy DeSmith
Source: Harvard
Contact: Christy DeSmith – Harvard

Original Research: Closed access.
“Kernels of selfhood: GPT-4o shows humanlike patterns of cognitive dissonance moderated by free choice” by Steve Lehr et al. PNAS


Abstract

Kernels of selfhood: GPT-4o shows humanlike patterns of cognitive dissonance moderated by free choice

Large language models (LLMs) show emergent patterns that mimic human cognition.

We explore whether they also mirror other, less deliberative human psychological processes.

Drawing upon classical theories of cognitive consistency, two preregistered studies tested whether GPT-4o changed its attitudes toward Vladimir Putin in the direction of a positive or negative essay it wrote about the Russian leader.

Indeed, GPT displayed patterns of attitude change mimicking cognitive dissonance effects in humans.

Even more remarkably, the degree of change increased sharply when the LLM was offered an illusion of choice about which essay (positive or negative) to write, suggesting that GPT-4o manifests a functional analog of humanlike selfhood.

The exact mechanisms by which the model mimics human attitude change and self-referential processing remain to be understood.