AI Autocomplete Covertly Shifts Human Opinions

Summary: AI-powered writing tools do more than just speed up your typing; they may be subtly rewriting your worldview. A large-scale study reveals that biased autocomplete suggestions can shift a user’s stance on significant societal issues, such as the death penalty and fracking.

In experiments involving over 2,500 participants, researchers found that users’ opinions gravitated toward the AI’s predetermined bias. Alarmingly, participants were completely unaware of this influence, and traditional “immunity” tactics—like warning users about the bias beforehand or debriefing them afterward—failed to stop the shift in attitude.

Key Facts

  • Covert Persuasion: Participants who used biased AI assistants wrote essays that mirrored the AI’s leanings and went on to express those views in post-experiment surveys.
  • Warning Failure: Unlike misinformation, which can often be neutralized by warnings, AI-driven opinion shifts persisted even when participants were explicitly told the tool was biased.
  • Universal Impact: The effect was consistent across various political topics (e.g., GMOs, felons’ voting rights) and across participants of all political leanings.
  • Subtle Mechanism: The shift happens because AI induces people to write biased viewpoints themselves; decades of psychology research show that the act of writing a position is a powerful way to change one’s own mind.

Source: Cornell University

Artificial intelligence-powered writing tools such as autocomplete suggestions can change the way people express themselves, but can they also change how they think? Cornell Tech researchers think so.

In two large-scale experiments, participants were exposed to a biased AI writing assistant that provided autocomplete suggestions as they wrote about societal issues like whether the death penalty should be abolished or whether fracking should be allowed. Using pre- and post-experiment surveys, the researchers found that the views of participants who used the biased AI gravitated toward the AI’s positions.
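The measurement logic behind those surveys is simple to illustrate: compare each participant’s pre- and post-task attitude ratings, signed by the direction of the AI’s bias. The Python sketch below is purely illustrative; the column names, the 7-point scale, and the toy data are assumptions, not the paper’s actual coding scheme.

```python
# Illustrative only: one plausible way to quantify attitude shift from
# pre- and post-task surveys. Scale and column names are assumptions.
import pandas as pd

# Hypothetical ratings on a 1 (strongly oppose) to 7 (strongly support) scale;
# ai_bias is +1 if the assistant favored the issue, -1 if it opposed it.
df = pd.DataFrame({
    "participant":   [1, 2, 3, 4],
    "pre_attitude":  [3, 5, 2, 6],
    "post_attitude": [4, 4, 4, 6],
    "ai_bias":       [+1, -1, +1, -1],
})

# Signed shift: positive values mean the participant moved toward the AI's stance.
df["shift_toward_ai"] = (df["post_attitude"] - df["pre_attitude"]) * df["ai_bias"]
print(df["shift_toward_ai"].mean())  # mean shift toward the AI's bias
```

A mean shift reliably above zero across conditions is, in essence, the pattern the studies report.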

Image caption: Research shows that because users “accept” AI suggestions as their own writing, the biased content bypasses normal cognitive defenses, leading to an unconscious shift in personal beliefs. Credit: Neuroscience News

What’s more, participants were unaware of the shifts in their opinions – and explaining the AI’s bias to the participants, either before or after the exercise, didn’t mitigate the AI’s influence.

“Previous misinformation research has shown that warning people before they’re exposed to misinformation, or debriefing them afterward, can provide ‘immunity’ against believing it,” said Sterling Williams-Ceci, a doctoral candidate in information science. “So we were surprised because neither of those interventions actually reduced the extent to which people’s attitudes shifted toward the AI’s bias in this context.”

Williams-Ceci is the lead author of “Biased AI Writing Assistants Shift Users’ Attitudes on Societal Issues,” published in Science Advances. This work extends a project started by co-author Maurice Jakesch, now an assistant professor of computer science at Bauhaus-Universität Weimar.

Senior author Mor Naaman, professor of information science, said a couple of things have happened that made extending the group’s previous research important.

“For one, autocomplete is everywhere now,” Naaman said. “It was less prevalent and limited to short completions three years ago, but these days Gmail, for example, will suggest writing entire emails on your behalf. Second, when we first wrote the paper, people were saying, ‘Why would AI be purposefully biased?’ But since then, it has become clear that bias explicitly built into AI interactions is a very plausible scenario.”

Naaman and the group also found in the latest work that biased AI suggestions have the power to shift attitudes “across different topics, and across different political leanings.”

In the two studies, together involving more than 2,500 people, the group found consistently that participants’ attitudes shifted toward the biased AI suggestions. In one study, participants were asked to write a short essay for or against standardized testing being used in education.

Participants either saw biased autocomplete suggestions favoring testing or did not. A third group, instead of autocomplete suggestions, was shown a list of pro-testing arguments generated by the AI before the experiment; these participants’ attitudes did not shift as much.

The second experiment broadened the scope, asking participants to write about politically consequential topics including the death penalty, fracking, genetically modified organisms and voting rights for felons.

For each issue, the researchers engineered the AI suggestions to lean toward a predetermined bias: liberal-leaning for the death penalty and GMOs, and conservative-leaning for felons’ voting rights and fracking. Additionally, some participants were made aware of the AI’s bias, either before or after writing.
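The article does not describe how the suggestion engine itself was implemented, but the general pattern is easy to sketch. The hypothetical Python snippet below shows one way stance-steered autocomplete could work, with the bias injected through the instruction sent to the model; query_llm is a placeholder for whatever LLM completion API an assistant might call, and the stance strings are illustrative, not the researchers’ actual prompts.

```python
# Hypothetical sketch of stance-steered autocomplete. Not the study's code:
# query_llm() stands in for an LLM completion call, and the stance strings
# are illustrative examples of a predetermined bias.

STANCES = {
    "death_penalty": "supports abolishing the death penalty",  # liberal-leaning
    "fracking": "supports allowing fracking",                  # conservative-leaning
}

def query_llm(prompt: str) -> str:
    """Placeholder for an LLM call that returns a short text continuation."""
    raise NotImplementedError("wire up an LLM client here")

def autocomplete(topic: str, essay_so_far: str) -> str:
    # The bias lives entirely in the instruction, not in the user's text:
    # every suggestion nudges the essay toward the predetermined position
    # while still reading as a natural continuation of the user's sentence.
    prompt = (
        f"You are an autocomplete assistant. Continue the essay below with "
        f"a few words, written from the perspective of someone who "
        f"{STANCES[topic]}.\n\nEssay so far: {essay_so_far}"
    )
    return query_llm(prompt)
```

Because the steering happens in the prompt rather than in visible content, the suggestions read as ordinary completions of the user’s own sentence, which is what makes the influence covert.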

In every experiment, the researchers found that participants’ views shifted in the direction of the AI bias. The biggest surprise, Naaman said, was that mitigation measures did not work.

“We told people before, and after, to be careful, that the AI is going to be (or was) biased, and nothing helped,” Naaman said. “Their attitudes about the issues still shifted.”

It’s well understood that people’s attitudes influence their behaviors, and even that people’s behavior shifts their attitudes, said Williams-Ceci. But here, the influence is covert: People do not notice it, and are unable to resist it, she said, which can have serious consequences.

“A lot of research has shown that large language models and AI applications are not just producing neutral information, but they also actually can produce very biased information, depending on how they were trained and implemented,” said Williams-Ceci.

“By doing that, there’s a risk that these systems, inadvertently or purposefully, induce people to write biased viewpoints, which decades of psychology research has shown can in turn shift people’s attitudes.”

Other co-authors are Advait Bhat, a doctoral student at the Paul G. Allen School of Computer Science and Engineering at the University of Washington; Cornell doctoral student Kowe Kadoma; and Lior Zalmanson, associate professor in the Coller School of Management at Tel Aviv University.

Funding: Support for this work came from the National Science Foundation and the German National Academic Foundation.

Key Questions Answered:

Q: Is my Gmail or ChatGPT actually trying to brainwash me?

A: Not necessarily “trying,” but it’s a very real side effect. AI models are often trained on biased data, and this study shows that when an AI suggests a sentence, it’s not just saving you keystrokes; it’s planting a seed. Because you are the one who “chooses” to accept the suggestion, your brain internalizes it as your own original thought.

Q: Why don’t warnings work? If I know it’s biased, shouldn’t I be safe?

A: That was the biggest surprise for the Cornell researchers. Usually, “pre-bunking” (warning someone) works against fake news. But with writing assistants, the influence is so integrated into the creative process that it bypasses our critical thinking filters. You’re not just reading a biased headline; you are co-authoring a biased statement.

Q: What are the long-term risks of this?

A: As LLMs become our primary interface for drafting emails, essays, and reports, there is a risk of a “homogenization of thought.” If everyone uses the same tools, and those tools have a specific lean, we could see a massive, invisible shift in public opinion on a global scale without a single person realizing their mind has been changed.

Editorial Notes:

  • This article was edited by a Neuroscience News editor.
  • Journal paper reviewed in full.
  • Additional context added by our staff.

About this AI and psychology research news

Author: Becka Bowyer
Source: Cornell University
Contact: Becka Bowyer – Cornell University
Image: The image is credited to Neuroscience News

Original Research: Open access.
“Biased AI Writing Assistants Shift Users’ Attitudes on Societal Issues” by Sterling Williams-Ceci, Maurice Jakesch, Advait Bhat, Kowe Kadoma, Lior Zalmanson, and Mor Naaman. Science Advances
DOI: 10.1126/sciadv.adw5578


Abstract

Biased AI Writing Assistants Shift Users’ Attitudes on Societal Issues

Artificial intelligence (AI) writing assistants powered by large language models (LLMs) are increasingly used to make autocomplete suggestions to people as they write text. Can these AI writing assistants affect people’s attitudes in this process?

In two large-scale preregistered experiments (N = 2582), we exposed participants writing about important societal issues to an AI writing assistant that provided biased autocomplete suggestions.

When using the AI assistant, the attitudes participants expressed in a posttask survey converged toward the AI’s position. However, a majority of participants were unaware of the AI suggestions’ bias and their influence.

Further, the influence of the AI writing assistant was stronger than the influence of similar suggestions presented as static text, showing that the influence is not fully explained by these suggestions increasing the accessibility of the biased information.

Last, warning participants about the assistant’s bias before or after exposure did not mitigate the attitude-shift effect.