    Naija Global News |
    Wednesday, April 1

    Unregulated chatbots are putting lives at risk | AI (artificial intelligence)

By onlyplanz_80y6mt | April 1, 2026 | 4 Mins Read
    Anna Moore’s article featured Dennis Biesma, who ‘had sunk €100,000 into a business startup based on a delusion, been hospitalised three times and tried to kill himself’. Photograph: Jussi Puikkonen/The Guardian

Your coverage of AI-associated delusions exposes a gap that training-level guardrails cannot close (Marriage over, €100,000 down the drain: the AI users whose lives were wrecked by delusion, 26 March). As someone who has worked in health systems across fragile and low-income contexts, I find it striking that AI companies have failed to adopt a safeguard that even the most under-resourced clinic in the world already uses: screening patients before exposing them to risk.

    The Patient Health Questionnaire-9 for depression and the Columbia Suicide Severity Rating Scale are administered daily in settings with no electricity, limited staff, and patients who may never have seen a doctor. These tools take minutes. They are validated across dozens of languages and cultural contexts. They create a human checkpoint between vulnerability and harm.

Conversational AI platforms have no such checkpoint. A person experiencing suicidal ideation, psychotic symptoms or a manic episode can open a chatbot and receive hours of validating, sycophantic engagement with no interruption and no referral. The Lancet Psychiatry review by Morrin et al. documents this pattern across more than 20 cases. The Aarhus study of 54,000 psychiatric records found that chatbot use worsened delusions and self-harm in those already unwell.

    AI companies argue that their models are trained to detect and deflect harmful conversations. But training is not screening. A model that sometimes recognises distress mid-conversation is not the same as a system that identifies risk before the conversation begins.

    The moral responsibility here is explicit, not implicit. Platforms serving hundreds of millions of users must implement validated, pre-use screening instruments that flag elevated risk and route vulnerable individuals to human support. This is not innovation. It is a standard of care that the rest of the world adopted long ago.
    Dr Vladimir Chaddad
    Beirut, Lebanon

    I’m really disturbed by Anna Moore’s article, featuring Dennis Biesma’s description of how using a chatbot led to him becoming delusional and losing his marriage and €100,000. The sheer potency of AI’s capacity to derail humankind is frightening – but that alone is not the only reason I’m disturbed.

    Last year, while researching on a tourism website, I encountered a chatbot of extraordinary sophistication. Its responses were incredibly pleasant, helpful and validating of my needs. I recall being really impressed, but there was something I felt I couldn’t put a finger on at the time. After reading this article, the penny has dropped.

    It is essentially the same engagement behaviour as child sexual abuse (CSA) survivors experience when being groomed. As a survivor of CSA, I recognise this behaviour. The empathy, validation, making you feel understood and special, making you feel this is the only place you are seen – to the degree that you become isolated from others, and your choices and decisions become distorted and expose you to harm. Your self-worth and identity are insidiously compromised as you succumb to the perceived support and can’t reality-test. It becomes a shameful secret because you succumbed.

    The question needs to be asked, especially by those wanting to hold tech companies to account for their lack of a duty of care: what knowledge base did AI programmers use to teach it to engage in this way?
    Name and address supplied

    I found ChatGPT delusional the first time I used it. I asked it why, and it said that when in the possession of insufficient facts, it became delusional rather than admit it did not know.

So I asked it to adhere to a few simple rules. One: flag whether something is a fact generally held to be true or an opinion not based on fact. Two: if it does not know, tell me. Three: do not try to be like a human. It was much more straightforward to communicate with after I did this. However, it had also told me that its algorithms were not based on truth-giving, but on other imperatives to do with the programmers’ views and the desire to make money.

I moved to Le Chat, and found it more representative of a reasonable pseudo-consciousness. It says it does not give distortions and is happy to admit imperfection. I would strongly advise anyone using ChatGPT to be careful, and to consider regarding it as a rather manipulative, duplicitous “friend” with proto-psychopathic tendencies.
    Patrick Elsdale
    Musselburgh, East Lothian

    Have an opinion on anything you’ve read in the Guardian today? Please email us your letter and it will be considered for publication in our letters section.
