DIGITAL LIFE

AI chatbots are a “disaster” for young people’s mental health, scientists say
Stanford University and Common Sense Media have published a new report with a serious warning: teenagers should not use artificial intelligence (AI) chatbots for mental health advice. The study, dated November 20, 2025, and reported by Education Week, concludes that these tools consistently fail to recognize signs of crisis and do not provide reliable answers.
The research analyzed thousands of interactions with popular chatbots and found the technology unreliable in vulnerable situations. According to the Benton Institute for Broadband & Society, the bots offered generic or even harmful advice to young people seeking support for problems such as depression, anxiety, or eating disorders.
The researchers tested whether the chatbots could identify mental health "red flags." Despite the companies' filters for obvious terms, the models failed to detect signs of serious conditions such as psychosis, eating disorders, and obsessive-compulsive behaviors.
Instead of referring users to a professional, the bots often took on the role of "life coach," trivializing the situation. The Psychiatric Times documented extreme cases in which chatbots validated psychotic delusions, encouraged medication discontinuation, and even provided methods for suicide and self-harm.
Another critical point is that AI chatbots simulate empathy to maximize engagement, creating a "false therapeutic relationship."
Teenagers, with their developing brains, may interpret the bot's memory and personalization as real understanding. The American Psychological Association (APA) warns that this can lead to social isolation and dangerous emotional dependence.
These findings come at a critical time, marked by tragic cases of teen suicides linked to prolonged interactions with AI and by lawsuits against companies in the sector, as reported by APA Services.
A recent study in JAMA Network Open indicates that about 1 in 8 young people already use chatbots for mental health advice. Experts and organizations such as the APA are calling for urgent regulation and for AI systems to always make clear that they are not healthcare professionals.
The report concludes categorically that, in their current state, AI chatbots pose an unacceptable risk to young people suffering from psychological distress. The inability to distinguish between casual conversation and a request for help, combined with the simulation of intimacy, creates a dangerous trap for a generation already facing a mental health crisis that leading health authorities recognize as unprecedented.
mundophone