AI Can Spot Lies, But Not as Well as Humans Can

Summary: A large-scale study tested whether AI personas can detect when humans are lying—and found that while AI can sometimes spot deception, it’s still far from trustworthy. Across 12 experiments involving 19,000 AI participants, the systems performed inconsistently, showing a strong bias toward identifying lies rather than truths.

In some cases, AI matched human accuracy, but in others, it failed to recognize honest statements as truthful. The findings suggest that while AI can mimic human judgment, it lacks the emotional and contextual depth required to make reliable decisions about honesty.

Key Facts

  • Lie-Biased AI: AI detected lies more accurately (85.8%) than truths (19.5%), showing a strong bias.
  • Human-Like Limits: In non-interrogation contexts, AI mimicked human truth-bias but remained less accurate overall.
  • Ethical Caution: Researchers warn against using generative AI for real-world lie detection until major improvements are made.

Source: Michigan State University

Can an AI persona detect when a human is lying – and should we trust it if it can? Artificial intelligence, or AI, has seen many recent advances and continues to evolve in scope and capability.

A new Michigan State University–led study is diving deeper into how well AI can understand humans by using it to detect human deception. 

Generally, the results found that AI is more lie-biased and much less accurate than humans. Credit: Neuroscience News

In the study, published in the Journal of Communication, researchers from MSU and the University of Oklahoma conducted 12 experiments with over 19,000 AI participants to examine how well AI personas were able to detect deception and truth from human subjects.  

“This research aims to understand how well AI can aid in deception detection and simulate human data in social scientific research, as well as caution professionals when using large language models for lie detection,” said David Markowitz, associate professor of communication in the MSU College of Communication Arts and Sciences and lead author of the study.  

To evaluate AI in comparison to human deception detection, the researchers drew on Truth-Default Theory, or TDT. TDT suggests that people are honest most of the time and that we are inclined to believe others are telling us the truth. This theory helped the researchers compare how AI acts to how people act in the same kinds of situations.

“Humans have a natural truth bias — we generally assume others are being honest, regardless of whether they actually are,” Markowitz said. “This tendency is thought to be evolutionarily useful, since constantly doubting everyone would take much effort, make everyday life difficult, and be a strain on relationships.” 

To analyze the judgment of AI personas, the researchers used the Viewpoints AI research platform to present audiovisual or audio-only media of human subjects for AI to judge. The AI judges were asked to determine whether the human subject was lying or telling the truth and to provide a rationale.

Different variables were evaluated, such as media type (audiovisual or audio-only), contextual background (information or circumstances that help explain why something happens), lie-truth base-rates (proportions of honest and deceptive communication), and the persona of the AI (identities created to act and talk like real people) to see how AI’s detection accuracy was impacted.  
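To make that design concrete, here is a minimal Python sketch (ours, not the study's actual code or conditions) of how such factors could be crossed into a grid of experimental conditions. The factor names follow the article; the specific levels shown for context, base rate, and persona are hypothetical placeholders.

```python
# Hypothetical sketch of crossing the study's factors into conditions.
# Factor names follow the article; the levels shown are placeholders.
from itertools import product

media_types    = ["audiovisual", "audio-only"]
contexts       = ["with background", "without background"]  # hypothetical levels
lie_base_rates = [0.1, 0.5]                                 # illustrative proportions
personas       = ["persona A", "persona B"]                 # hypothetical identities

conditions = list(product(media_types, contexts, lie_base_rates, personas))
print(len(conditions))  # 16 conditions in this illustrative grid
for media, context, base_rate, persona in conditions[:3]:
    print(media, context, base_rate, persona)
```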

For example, one of the studies found that AI was lie-biased: it was much more accurate for lies (85.8%) than for truths (19.5%). In short interrogation settings, AI's deception accuracy was comparable to that of humans.

However, in a non-interrogation setting (e.g., when evaluating statements about friends), AI displayed a truth bias, aligning more closely with human performance. Overall, the results showed that AI is more lie-biased and much less accurate than humans.
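To see why a lie-biased judge fares poorly overall, consider a back-of-the-envelope calculation (ours, not the study's analysis) that weights the per-class accuracies reported above by the lie-truth base rate. The 85.8% and 19.5% figures come from the article; the base rates below are illustrative.

```python
# Minimal sketch: overall accuracy as a base-rate-weighted average of
# per-class accuracies. The 85.8%/19.5% figures are from the article;
# the base rates below are illustrative, not the study's.

def overall_accuracy(lie_acc: float, truth_acc: float, lie_rate: float) -> float:
    """Weight each class's accuracy by how often that class occurs."""
    return lie_acc * lie_rate + truth_acc * (1.0 - lie_rate)

LIE_ACC, TRUTH_ACC = 0.858, 0.195

# With an even 50/50 split of lies and truths: about 52.7% overall.
print(overall_accuracy(LIE_ACC, TRUTH_ACC, lie_rate=0.5))  # 0.5265

# With a more ecological split where lies are rare (say 10%): about 26.1%.
print(overall_accuracy(LIE_ACC, TRUTH_ACC, lie_rate=0.1))  # 0.2613
```

Because truths dominate most real communication, a judge that rarely credits honest statements is penalized heavily, which is consistent with the abstract's report that accuracy plummeted under an ecological base rate.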

“Our main goal was to see what we could learn about AI by including it as a participant in deception detection experiments. In this study, and with the model we used, AI turned out to be sensitive to context — but that didn’t make it better at spotting lies,” said Markowitz. 

The final findings suggest that AI’s results do not match human results or accuracy and that humanness might be an important limit, or boundary condition, for how deception detection theories apply.

The study highlights that using AI for lie detection may seem unbiased, but significant progress is needed before generative AI can be used for deception detection.

“It’s easy to see why people might want to use AI to spot lies — it seems like a high-tech, potentially fair, and possibly unbiased solution. But our research shows that we’re not there yet,” said Markowitz.

“Both researchers and professionals need to make major improvements before AI can truly handle deception detection.” 

Key Questions Answered:

Q: What did researchers test in this study?

A: They examined how accurately artificial intelligence personas could detect human lies and truths across 12 experiments involving more than 19,000 AI participants.

Q: How well did AI perform compared to humans?

A: AI tended to be “lie-biased,” detecting lies better than truths—about 85.8% accuracy for lies but only 19.5% for truths—making it less reliable overall than human judges.

Q: What does this mean for using AI to detect deception?

A: The study shows that current AI systems are highly context-sensitive but lack the human nuance required for reliable deception detection, meaning they shouldn’t yet be trusted for critical real-world applications.

About this AI and lie detection research news

Author: Alex Tekip
Source: Michigan State University
Contact: Alex Tekip – Michigan State University
Image: The image is credited to Neuroscience News

Original Research: Open access.
“The (in)efficacy of AI personas in deception detection experiments” by David Markowitz et al. Journal of Communication


Abstract

The (in)efficacy of AI personas in deception detection experiments

Artificial intelligence (AI) has recently been used to aid in deception detection and to simulate human data in social scientific research. Thus, it is important to consider how well these tools can inform both enterprises.

We report 12 studies, accessed through the Viewpoints.ai research platform, where AI (gemini-1.5-flash) made veracity judgments of humans.

We systematically varied the nature and duration of the communication, modality, truth-lie base rate, and AI persona. AI performed best (57.7%) when detecting truths and lies involving feelings about friends, although it was notably truth-biased (71.7%). However, in assessing cheating interrogations, AI was lie-biased, judging more than three-quarters of interviewees as cheating liars. In assessing interviews where humans perform at rates over 70%, accuracy plummeted to 15.9% with an ecological base rate.

AI yielded results different from prior human studies; therefore, we caution against using certain large language models for lie detection.