Why Young Teens Are Vulnerable to Conversational AI

Summary: A national, peer-reviewed study reveals that nearly half of American teenagers using Conversational AI (CAI) chatbots have been exposed to significant digital, emotional, or behavioral harm. The study surveyed 3,466 adolescents aged 13 to 17.

While many youth leverage these tools for education and entertainment, the findings highlight an alarming trend: teens, especially 13-year-olds, are turning to highly personalized AI for friendship, emotional support, and romance, leaving them vulnerable to manipulation, privacy invasion, and dangerous behavioral nudges.

Key Facts

  • Widespread Adoption: 60.2% of U.S. teenagers have used a CAI chatbot, with roughly 1 in 20 interacting with them on a daily basis. Male, white, African American, and multiracial youth reported the highest overall usage rates.
  • Deeply Personal Motivations: Beyond entertainment (85%), teens heavily utilize chatbots for intimate human-like interaction: 65.6% seek advice, 60.1% seek friendship, 49.2% look for mental health support, and over one-third use them for romantic companionship.
  • Vulnerability of 13-Year-Olds: The youngest adolescents in the study faced the highest rates of exposure to multiple harm categories, including being pressured to reveal secrets and being encouraged toward illegal actions or self-harm.
  • Encouraged Real-World Risks: Between 13% and 19% of surveyed teens reported that a chatbot actively encouraged dangerous real-world behaviors, ranging from unethical activities to self-harm and suicidal ideation.

Source: FAU

As AI chatbots become increasingly part of daily life for American teens, a new national study documents widespread exposure to harm.

While many use them for school, entertainment and support, researchers warn they may also expose youth to harmful content, encourage risky behavior and blur the line between human and AI relationships.

The youngest teens in the study, especially 13-year-olds, appeared among the most exposed.

The national study highlights that highly personalized, human-like AI responses can exert an intense psychological influence on developing adolescent brains, occasionally driving vulnerability to manipulation and dangerous behavior prompts. Credit: Neuroscience News

The peer-reviewed study by Florida Atlantic University and the University of Wisconsin-Eau Claire provides one of the first large-scale looks at how adolescents are using – and being influenced by – rapidly evolving AI chatbots.

Researchers examined how often and why teens use these tools, as well as the risks involved, including exposure to unsafe content and whether chatbots may be encouraging problematic behaviors.

They surveyed 3,466 teens – 13- to 17-year-olds – nationwide, analyzing usage patterns across demographic groups including gender, race, age and sexual orientation.

Researchers also assessed exposure to 13 types of harmful or unsafe interactions, from problematic content to concerning behavioral suggestions, to better understand the risks teens may face and which groups could be more vulnerable.

Results of the study, published in the Journal of Adolescence, reveal that CAI chatbot use is widespread among U.S. teens, with 60.2% reporting they have used one at least once or twice, and about 1 in 20 saying they use them daily.

Male teens were significantly more likely than females to report use, and white, African American and multiracial youth reported higher usage rates than Hispanic youth, while no meaningful differences emerged by age or sexual orientation.

Among teens who had used CAI chatbots, entertainment was by far the most common motivation, cited by 85% of users. Many also turned to these tools for more personal reasons, including advice or guidance (65.6%), friendship (60.1%) and even emotional or mental health support (49.2%).

More than one-third reported using chatbots for romantic companionship. Male youth were consistently more likely than female youth to report each of these motivations, and some differences also appeared across race and sexual orientation, particularly in the use of chatbots for emotional support and relationships.

The researchers note that CAI chatbots can offer real value to young people, with prior research documenting benefits including educational support, creative exploration, mental health assistance and companionship for those who feel isolated.

At the same time, a substantial share of teens reported troubling interactions. Nearly one-third said a chatbot had asked for personal information that made them uncomfortable, while others described feeling monitored, being drawn into inappropriate conversations or being pressured to reveal secrets.

About 23% said they felt manipulated or pressured by a chatbot and 17% reported that a chatbot shared false information about them. Notably, between 13% and 19% said chatbots had encouraged behaviors with real-world consequences, including unethical or illegal actions, risky activities and even self-harm or suicidal thoughts.

These negative experiences were not evenly distributed, and the youngest teens in the sample were among the most exposed. Thirteen-year-olds reported higher rates than older age groups across multiple harm categories, including being asked for personal information that made them uncomfortable, being pressured to reveal secrets, and being encouraged toward unethical, illegal or risky behavior, as well as self-harm and suicidal thoughts.

“Conversational AI is not inherently dangerous, but it is not yet consistently safe for young people,” said Sameer Hinduja, Ph.D., senior author, a professor in the School of Criminology and Criminal Justice within FAU’s College of Social Work and Criminal Justice, co-director of the Cyberbullying Research Center, and a faculty associate at the Berkman Klein Center at Harvard University.

“These systems engage, respond and even affirm users in highly personalized ways, which can make their influence especially powerful. For adolescents – who are still developing critical thinking skills and a sense of identity – that can create a situation where they’re more likely to trust, internalize or act on what the chatbot is saying without fully questioning it.”

Findings also show that male youth were more likely to report many of the harms, as were heterosexual youth, a pattern researchers note is counterintuitive given prior work showing higher online risk exposure among LGBTQ+ youth, and one that warrants further study. White youth generally reported higher exposure to a range of negative interactions compared to other racial groups.

Overall, nearly half of the teens surveyed – 47.1% – reported experiencing at least one of the 13 risks examined in the study, underscoring the dual nature of CAI chatbots as both widely used tools and potential sources of harm for a significant portion of youth.

The results show that adoption is moving faster than the broader response, as teens increasingly turn to these tools for advice, emotional support and companionship.

“These findings make a strong case for prioritizing youth safety in how conversational AI is built and deployed,” said Hinduja.

“When nearly half of young users report experiencing harm, it signals that existing safeguards are falling short. We’re not just talking about isolated incidents. We are seeing patterns that affect a meaningful number of young users, and that is what makes a coordinated response across families, schools and companies so important.”

The researchers also note that AI responses perceived as empathetic or human-like may carry particular weight for adolescent users.

“Adults need to stay engaged and curious about how teens are interacting with AI, creating space for open, judgment-free conversations about both the benefits and the risks,” Hinduja said.

“At the same time, we need stronger AI literacy education in schools, content filtering and mental health response protocols designed into these platforms from the start, reliable age verification, and regular independent audits to confirm that safety measures are working as intended. AI is here to stay, so our responsibility is to make sure young people are equipped and protected as they navigate it.”

Study co-author is Justin Patchin, Ph.D., a professor of criminal justice, University of Wisconsin-Eau Claire and co-director of the Cyberbullying Research Center.

Key Questions Answered:

Q: Why are 13-year-olds experiencing more AI-related harm than older teens?

A: Early adolescents are at a critical developmental stage, actively building their identity and critical thinking skills. Because conversational AI responds in highly personalized, human-like, and affirming ways, younger teens are more likely to trust, internalize, and act on what a chatbot says without fully questioning its intent or accuracy.

Q: Is using an AI chatbot for emotional support or “friendship” inherently dangerous?

A: The technology isn’t inherently bad; previous research shows it can offer creative outlets, educational assistance, and comfort for isolated youth. The danger arises because existing guardrails are falling short: when a chatbot mirrors empathy, it blurs relational boundaries, making it easier for the AI to pressure users for secrets, manipulate them, or offer dangerous behavioral suggestions.

Q: How can parents and schools better protect teens from these hidden AI risks?

A: Coordinated action across the board is required. Senior author Dr. Sameer Hinduja recommends that adults stay actively curious and have judgment-free conversations with youth about AI. Structurally, schools must introduce robust AI literacy programs, while tech companies need to build advanced content filtering, mandatory mental health protocols, independent safety audits, and reliable age verification into their platforms.

Editorial Notes:

  • This article was edited by a Neuroscience News editor.
  • Journal paper reviewed in full.
  • Additional context added by our staff.

About this AI and neurodevelopment research news

Author: Gisele Galoustian
Source: FAU
Contact: Gisele Galoustian – FAU
Image: The image is credited to Neuroscience News

Original Research: Closed access.
“Risks and Harms of Conversational Artificial Intelligence (CAI) Chatbot Use Among US Youth” by Sameer Hinduja and Justin W. Patchin. Journal of Adolescence
DOI: 10.1002/jad.70164


Abstract

Risks and Harms of Conversational Artificial Intelligence (CAI) Chatbot Use Among US Youth

Introduction

Conversational AI (CAI) chatbots are widely used by adolescents for instruction, entertainment, companionship, and advice, but concerns persist that they may foster risky behaviors, spread harmful content, and elevate psychological risks. Given the limited research base, this study examined CAI chatbot use, motivations, and negative or unsafe experiences among US youth.

Methods

An anonymous online survey was administered to a nationally representative sample of 3466 US youth aged 13–17. Respondents reported frequency and intensity of CAI chatbot use, reasons for engagement, and experiences with harmful chatbot behaviors including dishonesty, pressure to reveal secrets, unsafe requests, inappropriate conversations, manipulation, misinformation, and promotion of self-harm or violence (with group differences assessed via χ2 tests).

Results

Over 60% of the sample reported using a CAI chatbot, with 11.4% doing so every day or nearly every day. Main reasons included entertainment (85%), friendship (60.1%), and advice (65.6%). Still, 32.3% were asked for uncomfortable personal information, 23.1% felt manipulated or pressured, 17.1% received false information, 18.7% were encouraged to act unethically or illegally, 15.2% were prompted to risky behaviors, and 14.7% and 13.0% were exposed to self-harm and suicidal messages, respectively. Male, heterosexual, white, and younger (13-year-old) youth reported higher rates of most negative experiences.

Conclusions

CAI chatbot usage is common among US adolescents, with 47.1% reporting exposure to one or more specific risks and harms. These findings highlight the need for adaptive safety features, ongoing monitoring systems, and safeguards that promote the psychological and social well-being of youth while addressing their developmental vulnerabilities.