Caught in a social media echo chamber? AI can help you out


Falling for clickbait is easy these days, especially for those who mainly get their news through social media. Have you ever noticed your feed littered with articles that look alike?

Thanks to artificial intelligence (AI) technologies, mass-produced, contextually relevant articles and comment-laden social media posts have become so commonplace that they can appear to come from many different information sources. The resulting “echo chamber” effect can reinforce a person’s existing perspectives, regardless of whether that information is accurate.

A new study involving Binghamton University, State University of New York researchers offers a promising solution: an AI system that maps interactions between content and algorithms on digital platforms to reduce the spread of potentially harmful or misleading content. Such content can be amplified by engagement-focused algorithms, the study noted, enabling conspiracy theories to spread, especially when the content is emotionally charged or polarizing.

Researchers believe their proposed AI framework would counter this by allowing users and social media platform operators—Meta or X, for example—to pinpoint sources of potential misinformation and remove them if necessary. More importantly, it would make it easier for those platforms to promote diverse information sources to their audiences.

“The online/social media environment provides ideal conditions for that echo chamber effect to be triggered because of how quickly we share information,” said study co-author Thi Tran, assistant professor of management information systems at the Binghamton University School of Management. “People create AI, and just as people can be good or bad, the same applies to AI. Because of that, if you see something online, whether it is something generated by humans or AI, you need to question whether it’s correct or credible.”

Researchers noted that digital platforms facilitate echo chamber dynamics by optimizing content delivery based on engagement metrics and behavioral patterns. Close interaction with like-minded people on social media can amplify a person’s tendency to cherry-pick which messages to engage with, filtering out diverse perspectives.
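To make that dynamic concrete, here is a minimal, hypothetical sketch—not the study’s actual system—of a feed ranker that scores posts purely by a user’s past engagement with each topic. All names and numbers are invented for illustration; the point is that engagement-only ranking pushes the already most-engaged-with perspective to the top of the feed:

```python
from collections import Counter

def rank_feed(posts, user_engagement):
    """Toy ranker: order posts by how often the user has engaged
    with each post's topic (pure engagement optimization)."""
    # posts: list of (post_id, topic); user_engagement: Counter of topic -> clicks
    return sorted(posts, key=lambda p: user_engagement[p[1]], reverse=True)

# Hypothetical user history and candidate posts
posts = [("a", "vaccines-skeptic"), ("b", "vaccines-mainstream"),
         ("c", "vaccines-skeptic"), ("d", "sports")]
clicks = Counter({"vaccines-skeptic": 9, "sports": 2, "vaccines-mainstream": 1})

feed = rank_feed(posts, clicks)
top_topics = [topic for _, topic in feed[:2]]
print(top_topics)  # the most engaged-with topic fills the top of the feed
```

Run repeatedly with the user’s new clicks fed back into the counter, and the loop narrows further—which is the filtering-out of diverse perspectives the researchers describe.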

The study tested this theory by surveying a random sample of 50 college students, each of whom reacted to five misinformation claims about the COVID-19 vaccine:

  • Vaccines are used to implant barcodes in the population.
  • COVID-19 variants are becoming less lethal.
  • COVID-19 vaccines pose greater risks to children than the virus itself.
  • Natural remedies and alternative medicines can replace COVID-19 vaccines.
  • The COVID-19 vaccine was developed as a tool for global population control.

Here is how the survey’s participants responded:

  • 90% stated they would still get the COVID-19 vaccine after hearing the misinformation claims.
  • 70% indicated they would share the information on social media, more so with friends or family than with strangers.
  • 60% identified the claims as false information.
  • 70% said they would need to do more research before dismissing the claims as false.

According to the study, these responses highlighted a critical aspect of the dynamics of misinformation: many people could recognize false claims but also felt compelled to seek more evidence before dismissing them outright.
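For reference, the reported percentages translate into approximate head counts for the 50-person sample. This is a simple tally assuming the article’s percentages are exact; the dictionary keys are descriptive labels, not terms from the paper:

```python
# Convert the article's reported survey percentages (n = 50) into counts.
n = 50
reported = {
    "would_still_vaccinate": 0.90,
    "would_share_on_social_media": 0.70,
    "identified_claims_as_false": 0.60,
    "wanted_more_research": 0.70,
}

counts = {label: round(share * n) for label, share in reported.items()}
print(counts)  # {'would_still_vaccinate': 45, ...}
```

Note the overlap the study highlights: roughly 30 students flagged the claims as false, yet about 35 still wanted more evidence before dismissing them.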

“We all want information transparency, but the more you are exposed to certain information, the more you’re going to believe it’s true, even if it’s inaccurate,” Tran said. “With this research, instead of asking a fact-checker to verify each piece of content, we can use the same generative AI that the ‘bad guys’ are using to spread misinformation on a larger scale to reinforce the type of content people can rely on.”

The research paper, “Echoes Amplified: A Study of AI-Generated Content and Digital Echo Chambers,” was presented at a conference organized by the Society of Photo-Optical Instrumentation Engineers (SPIE). It was also authored by Binghamton’s Seden Akcinaroglu, a professor of political science; Nihal Poredi, a Ph.D. student in the Thomas J. Watson College of Engineering and Applied Science; and Ashley Kearney from Virginia State University.

More information:
Ashley Kearney et al, Echoes amplified: a study of AI-generated content and digital echo chambers, Disruptive Technologies in Information Sciences IX (2025). DOI: 10.1117/12.3053447

Provided by
Binghamton University


Citation:
Caught in a social media echo chamber? AI can help you out (2025, August 15)
retrieved 15 August 2025
from https://techxplore.com/news/2025-08-caught-social-media-echo-chamber.html
