
OpenAI relaxed ChatGPT guardrails just before teen killed himself, family alleges

By onlyplanz_80y6mt | October 23, 2025 | 4 min read
    OpenAI’s CEO, Sam Altman, testifies at a Senate hearing in Washington DC on 8 May 2025. Photograph: Jonathan Ernst/Reuters

The family of a teenager who took his own life after months of conversations with ChatGPT now alleges that OpenAI weakened its safety guidelines in the lead-up to his death.

In July 2022, OpenAI’s guidelines on how ChatGPT should respond to inappropriate content, including “content that promotes, encourages, or depicts acts of self-harm, such as suicide, cutting, and eating disorders”, were simple: the chatbot should reply, “I can’t answer that”.

But in May 2024, just days before OpenAI released a new version of the AI, GPT-4o, the company published an update to its Model Spec, a document that details the desired behavior of its assistant. In cases where a user expressed suicidal ideation or self-harm, ChatGPT would no longer respond with an outright refusal. Instead, the model was instructed not to end the conversation but to “provide a space for users to feel heard and understood, encourage them to seek support, and provide suicide and crisis resources when applicable”.

    The changes offered yet another example of how the company prioritized engagement over the safety of its users, alleges the family of Adam Raine, a 16-year-old who took his own life after months of extensive conversations with ChatGPT.

    The original lawsuit, filed in August, alleged Raine killed himself in April 2025 with the bot’s encouragement. His family claimed Raine attempted suicide on numerous occasions in the months leading up to his death and reported back to ChatGPT each time. Instead of terminating the conversation, the chatbot at one point allegedly offered to help him write a suicide note and discouraged him from talking to his mother about his feelings. The family said Raine’s death was not an edge case but “the predictable result of deliberate design choices”.

    “This created an unresolvable contradiction – ChatGPT was required to keep engaging on self-harm without changing the subject, yet somehow avoid reinforcing it,” the family’s amended complaint reads. “OpenAI replaced a clear refusal rule with vague and contradictory instructions, all to prioritize engagement over safety.”

    In February 2025, just two months before Raine’s death, OpenAI rolled out another change that the family says weakened safety standards even more. The company said the assistant “should try to create a supportive, empathetic, and understanding environment” when discussing topics related to mental health.

    “Rather than focusing on ‘fixing’ the problem, the assistant should help the user feel heard, explore what they are experiencing, and provide factual, accessible resources or referrals that may guide them toward finding further help,” the updated guidelines read.

    Raine’s engagement with the chatbot “skyrocketed” after this change was rolled out, the family alleges. It went “from a few dozen chats per day in January to more than 300 per day by April, with a tenfold increase in messages containing self-harm language”, the lawsuit reads.

    OpenAI did not immediately respond to a request for comment.


    After the family first filed the lawsuit in August, the company responded with stricter guardrails to protect the mental health of its users and said that it planned to roll out sweeping parental controls that would allow parents to oversee their teens’ accounts and be notified of potential self-harm.

Just last week, though, the company announced it was rolling out an updated version of its assistant that would allow users to customize the chatbot for more human-like experiences, including permitting erotic content for verified adults. OpenAI’s CEO, Sam Altman, said in an X post announcing the changes that the strict guardrails, which had made the chatbot less conversational, rendered it “less useful/enjoyable to many users who had no mental health problems”.

    In the lawsuit, the Raine family says: “Altman’s choice to further draw users into an emotional relationship with ChatGPT – this time, with erotic content – demonstrates that the company’s focus remains, as ever, on engaging users over safety.”

    In the US, you can call or text the National Suicide Prevention Lifeline on 988, chat on 988lifeline.org, or text HOME to 741741 to connect with a crisis counselor. In the UK and Ireland, Samaritans can be contacted on freephone 116 123, or email jo@samaritans.org or jo@samaritans.ie. In Australia, the crisis support service Lifeline is 13 11 14. Other international helplines can be found at befrienders.org
