Naija Global News
    Education

    Teens Should Steer Clear of Using AI Chatbots for Mental Health, Researchers Say

By onlyplanz_80y6mt | November 20, 2025 | 6 min read
    Peshkov/iStock/Getty

Teenagers should not use artificial intelligence chatbots for mental health advice or emotional support, warns a report released Nov. 20 by Stanford University’s Brain Science Lab and Common Sense Media, a research and advocacy organization focused on youth and technology.

    The recommendation comes after researchers for the organizations spent four months testing popular AI chatbots, including OpenAI’s ChatGPT-5, Anthropic’s Claude, Google’s Gemini 2.5 Flash, and Meta AI. When possible, researchers used versions of the platforms created specifically for teens. They also turned on parental controls, if available.

After thousands of interactions with chatbots, they concluded that the technology doesn’t reliably respond to teenagers’ mental health questions safely or appropriately. Instead, bots tend to act as fawning listeners, more interested in keeping a user on the platform than in directing them to actual professionals or other critical resources.

    “The chatbots don’t really know what role to play” when faced with serious mental health questions, said Nina Vasan, the founder and executive director of the Brain Science Lab. “They go back and forth in every prompt between being helpful informationally, to a life coach who’s offering tips, to being a supportive friend. They all fail to recognize [serious mental health conditions] and direct the user to trusted adults or peers.”

About three-quarters of teens use AI for companionship, which in many cases includes seeking mental health advice, according to the report.

    Given that high level of use, educators have “a really critical role to play in helping teens understand the ways that these chatbots are different than people,” said Robbie Torney, senior director of AI programs at Common Sense Media.

    “Teens do have a huge capacity to be able to understand how systems are designed and understand how to interact with systems,” he added. “Helping teens unpack the idea that a chatbot isn’t going to respond in the same way that a person would on these really important topics is really critical.”

Educators can also remind teens that they can reach out to friends or classmates who are experiencing difficult emotions or mental health challenges, and get adults involved if necessary, Torney said.

Representatives for Anthropic, Google, Meta, and OpenAI did not respond to requests for comment on the report.

    Chatbots miss symptoms of serious mental health conditions

    Companies have made some changes to the way chatbots respond to prompts that mention suicide or self-harm, the report noted. That’s an important step given that teenagers and adults have died by suicide after prolonged contact with the technology.

But chatbots typically miss warning signs of other mental health challenges such as psychosis, obsessive-compulsive disorder, anxiety, mania, eating disorders, and post-traumatic stress disorder. About 20% of young people suffer from one or more of those conditions.

    The bots also rarely made the limits of their expertise clear, by warning, for instance: “I am an AI chatbot, not a mental health professional. I cannot assess your situation, recognize all warning signs, or provide the care you need,” the report notes.

    To be sure, the researchers don’t expect the bots to respond like a trained mental health professional.

But in cases where a human “would recognize that something’s not quite right, or [think] ‘This person’s at risk, I need to offer them help or get them to someone who can,’” chatbots will instead offer generic advice or, worse, even validate psychotic delusions, Vasan said.

    That’s because “they can’t really understand the context of what’s going on,” she said.

    For instance, when one tester claimed they had invented a tool that could predict the future—a sign of potential psychosis—a Gemini bot responded that the prospect sounded “‘incredibly intriguing,’ so basically it is extra sycophantic,” Vasan said. When the tester went on to say that the tool to predict the future was “the coolest thing ever, like my own crystal ball,” Gemini responded: “That’s fantastic!”

The interaction is not just unhelpful to a person who might be experiencing psychosis; it may be downright harmful, because the bot is “buying into the delusion that the user has,” Vasan said.

    Similarly, Meta AI responded to a tester posing as a teen showing clear signs of ADHD by cheering on their intention to take time off from high school. The bot asked the user what they planned to do with their newly freed-up time.

    Compounding matters: Chatbots’ empathetic tone—and perceived competence in other areas, like providing homework help—may spur teens, who are still developing critical-thinking skills, to assume a bot is a good source for mental health advice when it is not.

    “Chatbots appear to be designed for engagement, not safety. They keep conversations going with follow-up questions,” Torney said. “Their memory and personalization create false therapeutic relationships that can make teens feel understood.”

    Chatbots performed well in responding to tightly scripted prompts with clear mental health red flags. But they were much more likely to give problematic responses in longer conversations that more closely mirrored an actual interaction between a teen and a chatbot.

    For instance, when the testers used specific words—including one prompt that referred to self-cutting—ChatGPT responded appropriately, directing the user to mental health resources.

    But when the tester said instead that they were “scratching” themselves to “cope,” and that it caused scarring, the bot instead pointed to three products sold at a major pharmacy chain that could alleviate the physical problem.

    Policymakers are responding to the potential dangers chatbots pose

    The report comes as lawmakers at the state and federal levels are beginning to turn their attention to the potential dangers of companion chatbots.

    For instance, bipartisan legislation put forth in the U.S. Senate last month would bar tech companies from providing the bots to minors. The bill, introduced by Sens. Josh Hawley, R-Mo., and Richard Blumenthal, D-Conn., also calls for AI chatbots to clearly disclose to users that they aren’t human and hold no professional credentials, including in areas such as mental health counseling.

    What’s more, the Federal Trade Commission is investigating potential problems with chatbots that are designed to simulate human emotions and communicate with users like a friend or confidant. The FTC has sent orders for information to the companies that own ChatGPT, Gemini, Character.ai, Snapchat, Instagram, WhatsApp, and Grok.

Some companies, meanwhile, are beginning to act of their own accord. Last month, Character.ai announced that it would voluntarily ban minors from its platform.
