Can an AI persona detect when a human is lying—and should we trust it if it can?
Artificial intelligence, or AI, continues to expand in scope and capability. A new Michigan State University-led study digs deeper into how well AI can understand humans by using it to detect human deception.
In the study, published in the Journal of Communication, researchers from MSU and the University of Oklahoma conducted 12 experiments with over 19,000 AI participants to examine how well AI personas were able to detect deception and truth from human subjects.
“This research aims to understand how well AI can aid in deception detection and simulate human data in social scientific research, as well as caution professionals when using large language models for lie detection,” said David Markowitz, associate professor of communication in the MSU College of Communication Arts and Sciences and lead author of the study.
To evaluate AI against human deception detection, the researchers drew on Truth-Default Theory (TDT). TDT holds that people are honest most of the time and that we are inclined to believe others are telling the truth. The theory gave the researchers a benchmark for comparing how AI judges behave against how people behave in the same situations.
“Humans have a natural truth bias—we generally assume others are being honest, regardless of whether they actually are,” Markowitz said. “This tendency is thought to be evolutionarily useful, since constantly doubting everyone would take much effort, make everyday life difficult, and be a strain on relationships.”
To analyze the judgments of AI personas, the researchers used the Viewpoints AI research platform to assign audiovisual or audio-only recordings of humans for the AI to judge. The AI judges were asked to decide whether the human subject was lying or telling the truth and to provide a rationale. The researchers varied several factors, including media type (audiovisual or audio-only), contextual background (information or circumstances that help explain why something happens), lie-truth base rates (the proportions of honest and deceptive communication), and the persona of the AI (identities created to act and talk like real people), to see how each affected the AI's detection accuracy.
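For readers curious what a single trial of this kind might look like in code, here is a minimal sketch of a persona-based judgment, assuming a generic large language model interface. The `query_llm` function, the `Persona` fields, and the prompt wording are all hypothetical illustrations, not the study's actual platform or materials.

```python
# Minimal sketch of one persona-based deception-judgment trial.
# `query_llm` is a hypothetical stand-in for any LLM API call; the persona
# fields and prompt wording are illustrative, not the study's materials.
from dataclasses import dataclass

@dataclass
class Persona:
    name: str
    background: str  # the identity the model is asked to role-play

def build_prompt(persona: Persona, transcript: str, context: str) -> str:
    return (
        f"You are {persona.name}, {persona.background}.\n"
        f"Context: {context}\n"
        f"Statement under evaluation:\n{transcript}\n\n"
        "Is the speaker LYING or TELLING THE TRUTH? "
        "Answer with one word (LIE or TRUTH), then a one-sentence rationale."
    )

def judge(persona: Persona, transcript: str, context: str, query_llm) -> dict:
    """Run one trial and record the verdict plus the model's rationale."""
    reply = query_llm(build_prompt(persona, transcript, context))
    verdict = "lie" if reply.strip().upper().startswith("LIE") else "truth"
    return {"persona": persona.name, "verdict": verdict, "rationale": reply}
```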
For example, one study found that AI was lie-biased: it was far more accurate at identifying lies (85.8%) than truths (19.5%). In short interrogation settings, the AI's deception accuracy was comparable to that of humans. In a non-interrogation setting, however (e.g., when evaluating statements about friends), the AI displayed a truth bias, aligning more closely with human performance. Overall, the results showed that AI is more lie-biased and much less accurate than humans.
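To see why a lie-biased judge is a problem, consider a back-of-the-envelope calculation using the reported per-class accuracies. The base rates below are illustrative assumptions, not figures from the study:

```python
# Overall accuracy of a lie-biased judge as a function of the lie base rate.
# Per-class accuracies come from the reported results; base rates are
# illustrative assumptions.
LIE_ACCURACY = 0.858    # reported accuracy on deceptive statements
TRUTH_ACCURACY = 0.195  # reported accuracy on honest statements

for lie_rate in (0.5, 0.25, 0.1):
    truth_rate = 1 - lie_rate
    overall = lie_rate * LIE_ACCURACY + truth_rate * TRUTH_ACCURACY
    print(f"lie base rate {lie_rate:.0%}: overall accuracy {overall:.1%}")

# With a 50/50 split the judge scores about 52.7% overall; if only 10% of
# statements are lies (closer to everyday communication, as TDT suggests),
# overall accuracy falls to roughly 26.1%.
```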
“Our main goal was to see what we could learn about AI by including it as a participant in deception detection experiments. In this study, and with the model we used, AI turned out to be sensitive to context—but that didn’t make it better at spotting lies,” said Markowitz.
Overall, the findings suggest that AI's judgments do not match human judgments or accuracy, and that humanness may be an important limit, or boundary condition, on how deception detection theories apply. The study highlights that using AI for lie detection may seem unbiased, but the field needs to make significant progress before generative AI can be used for deception detection.
“It’s easy to see why people might want to use AI to spot lies—it seems like a high-tech, potentially fair, and possibly unbiased solution. But our research shows that we’re not there yet,” said Markowitz. “Both researchers and professionals need to make major improvements before AI can truly handle deception detection.”
More information:
David M. Markowitz et al., "The (in)efficacy of AI personas in deception detection experiments," Journal of Communication (2025). DOI: 10.1093/joc/jqaf034

