When the social media platform X (formerly Twitter) invited users to flag false or misleading posts, critics initially scoffed. How could the same public that spreads misinformation be trusted to correct it? But a recent study by researchers from the University of Rochester, the University of Illinois Urbana–Champaign, and the University of Virginia finds that “crowdchecking” (X’s collaborative fact-checking experiment known as Community Notes) actually works.
The paper, published in the journal Information Systems Research, shows that when a community note about a post’s potential inaccuracy appears beneath a tweet, its author is far more likely to retract that tweet.
“Trying to define objectively what misinformation is and then removing that content is controversial and may even backfire,” notes co-author Huaxia Rui, the Xerox Professor of Information Systems and Technology at URochester’s Simon Business School. “In the long run, I think a better way for misleading posts to disappear is for the authors themselves to remove those posts.”
Using a causal inference method called regression discontinuity and a vast dataset of X posts (previously known as tweets), the researchers find that public, peer-generated corrections can do something experts and algorithms have struggled to achieve. Showing some notes or corrective content alongside potentially misleading information, Rui says, can indeed “nudge the author to remove that content.”
Community Notes on X: An experiment in public correction
Community Notes operates on a threshold mechanism: for a corrective note to appear publicly, it must earn a "helpfulness" score of at least 0.4. (A proposed note is first shown to contributors for evaluation. The bridging algorithm used by Community Notes prioritizes ratings from a diverse range of users, specifically people who have disagreed in their past ratings, to prevent partisan bloc voting from manipulating a note's visibility.)
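For readers who want the mechanism in concrete terms, here is a minimal Python sketch of the visibility rule. The 0.4 cutoff comes from the study; the function and constant names are illustrative, and the helpfulness score is reduced to a single number here, whereas the real bridging algorithm derives it from ratings by users with divergent rating histories.

```python
# Minimal sketch of the Community Notes visibility threshold described above.
# The 0.4 cutoff is from the article; everything else is simplified for
# illustration and is not X's actual implementation.

HELPFULNESS_THRESHOLD = 0.4  # notes at or above this score are shown publicly

def note_is_public(helpfulness_score: float) -> bool:
    """Return True if a note clears the public-display threshold."""
    return helpfulness_score >= HELPFULNESS_THRESHOLD

# A note scored 0.41 becomes public; one scored 0.39 stays visible
# only to Community Notes contributors.
for score in (0.41, 0.39):
    print(score, "public" if note_is_public(score) else "contributors only")
```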
Notes that fall just below that threshold, by contrast, remain hidden from the public. This design creates a natural experiment: the researchers could compare X posts with notes just above and just below the cutoff (that is, visible to the public versus visible only to Community Notes contributors), allowing them to measure the causal effect of public exposure.
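A rough Python sketch of that regression-discontinuity comparison appears below, using simulated data. The deletion probabilities, bandwidth, and variable names are assumptions for illustration only, not the paper's code or results; the idea is simply that posts just above and just below the cutoff are otherwise similar, so the jump in deletion rates at the threshold can be read as the effect of the note being public.

```python
import numpy as np

# Simulated illustration of a regression-discontinuity comparison at the
# 0.4 helpfulness cutoff. All numbers here are made up for demonstration.
rng = np.random.default_rng(0)
CUTOFF, BANDWIDTH = 0.4, 0.05

scores = rng.uniform(0.3, 0.5, 100_000)  # note helpfulness scores
public = scores >= CUTOFF                # is the note shown publicly?
# Posts with a public note get a (simulated) higher deletion probability.
deleted = rng.random(scores.size) < np.where(public, 0.12, 0.09)

# Compare deletion rates in a narrow window on each side of the cutoff;
# the jump at the threshold estimates the causal effect of public exposure.
near = np.abs(scores - CUTOFF) <= BANDWIDTH
rate_above = deleted[near & public].mean()
rate_below = deleted[near & ~public].mean()
print(f"deletion rate just above cutoff: {rate_above:.3f}")
print(f"deletion rate just below cutoff: {rate_below:.3f}")
print(f"estimated discontinuity: {rate_above - rate_below:.3f}")
```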
In total, the researchers analyzed 264,600 X posts that received at least one community note during two separate time windows: the first before a U.S. presidential election, when misinformation typically surges (June–August 2024), and the second two months after the election (January–February 2025).
The results were striking: X posts with public correction notes were 32 percent more likely to be deleted by their authors than posts whose notes remained private, demonstrating the power of voluntary retraction as an alternative to forcible content removal. The effect persisted across both study periods.
The reputation effect
An author’s decision to retract or delete, the team discovered, is primarily driven by social concerns. “You worry,” says Rui, “that it’s going to hurt your online reputation if others find your information misleading.”
Publicly displayed Community Notes (highlighting factual inaccuracies) function as a signal to the online audience that “the content—and, by extension, its author—is untrustworthy,” the researchers note.
In the social media ecosystem, reputation is important—especially for users with influence—and speed matters greatly, as misinformation tends to spread faster and farther than corrections.
The researchers found that public notes not only increased the likelihood of deletion but also accelerated it: among retracted X posts, the faster a note was publicly displayed, the sooner the noted post was taken down.
Users whose posts attract substantial visibility and engagement, or who have large follower bases, face heightened reputational risks. As a result, verified X users (those marked by a blue check mark) were particularly quick to delete their posts after they garnered public Community Notes, exhibiting greater concern for maintaining their credibility.
The overall pattern suggests that social media’s own dynamics, such as status, visibility, and peer feedback, can improve online accuracy.
A democratic defense against misinformation?
Crowdchecking, the team concludes, "strikes a balance between protecting First Amendment rights and the urgent need to curb misinformation." It relies not on censorship but on collective judgment and public correction. The algorithm behind Community Notes prioritizes rating diversity, elevating notes that draw support from users on both sides of past disagreements.
Initially, Rui admits, he was surprised by the team’s strong findings. “For people to be willing to retract, it’s like admitting their mistakes or wrongdoing, which is difficult for anyone, especially in today’s super polarized environment with all its echo chambers,” he says.
At the outset of the study, the team had wondered whether the correction mechanism might even backfire. In other words, could a publicly displayed note really induce people to retract their problematic posts, or would it make them dig in their heels?
Now they know it works.
“Ultimately,” Rui says, “the voluntary removal of misleading or false information is a more civic and possibly more sustainable way to resolve problems.”
More information:
Yang Gao et al, Can Crowdchecking Curb Misinformation? Evidence from Community Notes, Information Systems Research (2025). DOI: 10.1287/isre.2024.1609
Citation:
The most effective online fact-checkers? Your peers (2025, November 17)
retrieved 17 November 2025
from https://techxplore.com/news/2025-11-effective-online-fact-checkers-peers.html