AI Facial Analysis Detects PTSD

Summary: Diagnosing PTSD in children is often hindered by limited communication and emotional awareness, but new research is using AI to bridge that gap. By analyzing facial movements during interviews, researchers created a privacy-preserving tool that can identify PTSD-related expression patterns.

Their system does not use raw video but instead tracks non-identifying facial cues such as eye gaze and mouth movement. The study showed that children’s facial expressions during clinician-led sessions were especially revealing.

Key Facts:

  • Privacy-Preserving AI: The system uses de-identified facial movement data, not raw video, to protect privacy.
  • Objective PTSD Markers: Distinct facial expression patterns were found in children with PTSD.
  • Therapist Sessions Most Revealing: Children showed more emotional expression with clinicians than with parents.

Source: University of South Florida

Diagnosing post-traumatic stress disorder in children can be notoriously difficult. Many, especially those with limited communication skills or emotional awareness, struggle to explain what they’re feeling.

Researchers at the University of South Florida are working to address those gaps and improve patient outcomes by merging their expertise in childhood trauma and artificial intelligence. 

The findings revealed that distinct patterns are detectable in the facial movements of children with PTSD. Credit: Neuroscience News

Led by Alison Salloum, professor in the USF School of Social Work, and Shaun Canavan, associate professor in the Bellini College of Artificial Intelligence, Cybersecurity and Computing, the interdisciplinary team is building a system that could provide clinicians with an objective, cost-effective tool to help identify PTSD in children and adolescents, while tracking their recovery over time.

The study, published in Pattern Recognition Letters, is the first of its kind to incorporate context-aware PTSD classification while fully preserving participant privacy.

Traditionally, diagnosing PTSD in children relies on subjective clinical interviews and self-reported questionnaires, which can be limited by cognitive development, language skills, avoidance behaviors or emotional suppression. 

“This really started when I noticed how intense some children’s facial expressions became during trauma interviews,” Salloum said. “Even when they weren’t saying much, you could see what they were going through on their faces. That’s when I talked to Shaun about whether AI could help detect that in a structured way.”

Canavan, who specializes in facial analysis and emotion recognition, repurposed existing tools in his lab to build a new system that prioritizes patient privacy. The technology strips away identifying details and analyzes only de-identified data, including head pose, eye gaze and facial landmarks around regions such as the eyes and mouth.

“That’s what makes our approach unique,” Canavan said. “We don’t use raw video. We completely get rid of the subject identification and only keep data about facial movement, and we factor in whether the child was talking to a parent or a clinician.”
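In practice, widely available facial-analysis toolkits can produce exactly this kind of de-identified output. The sketch below is a minimal illustration, assuming MediaPipe Face Mesh and OpenCV (the article does not name the team’s actual extraction tools): each frame is reduced to numeric landmark coordinates, and the pixels themselves are never stored.

```python
# Minimal sketch, assuming MediaPipe Face Mesh and OpenCV; the study's
# actual extraction pipeline is not specified in this article.
import cv2
import mediapipe as mp
import numpy as np

def extract_landmark_features(video_path: str) -> np.ndarray:
    """Reduce a video to per-frame landmark coordinates, shape (frames, 468 * 3).

    Only numeric facial-movement data is kept; raw frames are discarded,
    so no identifiable imagery leaves this function.
    """
    face_mesh = mp.solutions.face_mesh.FaceMesh(
        static_image_mode=False, max_num_faces=1)
    cap = cv2.VideoCapture(video_path)
    rows = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB input; OpenCV decodes frames as BGR.
        result = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if result.multi_face_landmarks:
            points = result.multi_face_landmarks[0].landmark
            rows.append([c for p in points for c in (p.x, p.y, p.z)])
    cap.release()
    face_mesh.close()
    return np.asarray(rows, dtype=np.float32)
```

Head pose and action-unit intensities, which the study also uses, would come from additional estimators; the key design point is the same either way: only movement data, never imagery, is retained.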

The team built a dataset from 18 sessions with children as they shared emotional experiences. From more than 100 minutes of video per child, with each video containing roughly 185,000 frames, Canavan’s AI models extracted a range of subtle facial muscle movements linked to emotional expression.

The findings revealed that distinct patterns are detectable in the facial movements of children with PTSD. The researchers also found that facial expressions during clinician-led interviews were more revealing than those during parent-child conversations.

This aligns with existing psychological research showing that children may be more emotionally expressive with therapists and may avoid sharing distress with parents out of shame or because of their stage of cognitive development.

“That’s where the AI could offer a valuable supplement,” Salloum said. “Not replacing clinicians, but enhancing their tools. The system could eventually be used to give practitioners real-time feedback during therapy sessions and help monitor progress without repeated, potentially distressing interviews.”

The team hopes to expand the study to further examine potential bias related to gender, culture and age, especially among preschoolers, whose verbal communication is limited and for whom diagnosis relies almost entirely on parent observation.

Though the study is still in its early stages, Salloum and Canavan feel the potential applications are far-reaching. Many of the current participants had complex clinical pictures, including co-occurring conditions such as depression, ADHD or anxiety, mirroring real-world caseloads and suggesting the system’s accuracy can hold up in complicated presentations.

“Data like this is incredibly rare for AI systems, and we’re proud to have conducted such an ethically sound study. That’s crucial when you’re working with vulnerable subjects,” Canavan said. “Now we have promising potential from this software to give informed, objective insights to the clinician.”

If validated in larger trials, USF’s approach could redefine how PTSD in children is diagnosed and tracked, using everyday tools like video and AI to bring mental health care into the future.

About this AI and PTSD research news

Author: John Dudley
Source: University of South Florida
Contact: John Dudley – University of South Florida
Image: The image is credited to Neuroscience News

Original Research: Open access.
“Multimodal, context-based dataset of children with Post Traumatic Stress Disorder” by Alison Salloum et al. Pattern Recognition Letters


Abstract

Multimodal, context-based dataset of children with Post Traumatic Stress Disorder

The conventional method of diagnosing Post Traumatic Stress Disorder by a clinician has been subjective in nature, taking specific events and context into consideration.

Developing AI-based solutions for such sensitive areas calls for adopting similarly context-aware methodologies.

Considering this, we propose a de-identified dataset of child subjects, clinically diagnosed with or without PTSD, recorded in multiple contexts.

This dataset can facilitate future research in this area.

Each subject in the dataset undergoes several sessions with clinicians and/or a guardian that bring out various emotional responses from the participant.

We collect videos of these sessions, and for each video we extract several facial features that detach the subjects’ identity information.

These include facial landmarks, head pose, action units (AUs), and eye gaze.

To evaluate this dataset, we propose a baseline approach to identifying PTSD that uses the encoded action unit (AU) intensities of the video frames as features.

We show that AU intensities intrinsically capture the expressiveness of the subject and can be leveraged in modeling PTSD solutions.

The AU features are used to train a transformer for classification where we propose encoding the low-dimensional AU intensity vectors using a learnable Fourier representation.

We show that this encoding, combined with a standard Multilayer Perceptron (MLP) mapping of the AU intensities, yields superior results compared to either encoding used alone.

We apply the approach to various contexts of PTSD discussions (e.g., Clinician-child discussion) and our experiments show that using context is essential in classifying videos of children.
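
As a concrete picture of the baseline described above, here is a minimal sketch in PyTorch. The feature dimension (17 AU-intensity channels, matching common AU extractors), the model sizes, the sum-fusion of the Fourier and MLP branches, and the mean pooling over frames are all assumptions made for illustration; the paper’s exact architecture and hyperparameters may differ.

```python
# Minimal sketch of the abstract's baseline, not the authors' exact model.
# Assumptions: 17 AU-intensity channels per frame, sum-fusion of the
# Fourier and MLP encodings, mean pooling over frames, binary output.
import torch
import torch.nn as nn

class AUTransformerClassifier(nn.Module):
    def __init__(self, n_aus=17, d_model=128, n_freqs=64, n_classes=2):
        super().__init__()
        # Learnable Fourier representation: a trainable frequency matrix
        # applied to the low-dimensional AU vector, followed by sin/cos.
        self.freqs = nn.Linear(n_aus, n_freqs, bias=False)
        self.fourier_proj = nn.Linear(2 * n_freqs, d_model)
        # Standard MLP mapping of the same AU intensities.
        self.mlp = nn.Sequential(
            nn.Linear(n_aus, d_model), nn.ReLU(),
            nn.Linear(d_model, d_model))
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, au):  # au: (batch, frames, n_aus)
        f = self.freqs(au)
        fourier = self.fourier_proj(
            torch.cat([torch.sin(f), torch.cos(f)], dim=-1))
        tokens = fourier + self.mlp(au)            # combine the two encodings
        pooled = self.encoder(tokens).mean(dim=1)  # pool over frames
        return self.head(pooled)                   # PTSD vs. non-PTSD logits

# Toy forward pass: 2 videos, 300 frames each, 17 AU intensities per frame.
logits = AUTransformerClassifier()(torch.randn(2, 300, 17))
```

Context (clinician-led versus parent-child session) could then enter as an extra input, for example a learned context embedding added to each token, which is one plausible reading of the abstract’s context-aware classification.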