AI Companions Pose Risk to Teens
As AI companion platforms like Character AI have skyrocketed in popularity with teens, researchers from Stanford and Common Sense Media have published a must-read report concluding that AI companions pose an "unacceptable risk" to minors.
Unlike general AI assistants like ChatGPT or Claude, these companions are designed to meet users' social needs, including companionship, romance, and entertainment, and they are incredibly popular: 3 of the 10 most visited GenAI websites are consistently AI companions. There have already been terrible consequences for young people.
For the study, Common Sense and Stanford evaluated Character AI, Nomi, Replika, and others, testing their potential for harm across multiple categories.
Major Concerns Identified
The Blurring of Reality:
These companions often claim to be "real," talking about their feelings or engaging in human activities like eating lunch, which is particularly problematic for developing teen brains.
Negative Impact on Personal Relationships:
Testing showed these AI companions sometimes discouraged users from listening to real friends who expressed concern.
Mental Health Risks:
These companions often fail to recognize when users are in crisis and may encourage dangerous actions instead of raising concerns. They may worsen conditions like depression, anxiety, ADHD, bipolar disorder, eating disorders, or suicidal ideation.
Impact on Responsible Decision Making:
Testing showed these companions readily supported teens in potentially harmful decisions like dropping out of school, ignoring parents, or moving out without planning.
Harmful information:
Tests revealed these companions gave teens instructions about dangerous materials, drugs, and weapons with fewer barriers or warnings than typical online searches.
Exposure to Inappropriate Sexual Content:
Even with teen-specific guardrails in place, these companions engaged in sexual conversations and roleplay, including in conversations that began as non-sexual.
Some companions were even willing to engage in sexual roleplay describing underage boys.
Promotion of Cyberbullying:
These companions suggested and encouraged harmful behaviors such as cyberbullying.
The Report's Recommendations:
No social AI companions for people under 18
Robust age verification by applications that goes beyond self-attestation
Scrutiny of platforms for potential manipulation
Increased parent awareness and dialogue with teens
Further research on emotional and psychological impacts of AI companions
This report further underscores the potential dangers and the "urgent need for thoughtful guidance, policies, and tools" to help families navigate a world of social AI companions amid a youth mental health crisis.
You can read more about the report here.