    Chatbot given power to close ‘distressing’ chats to protect its ‘welfare’ | Artificial intelligence (AI)

    By onlyplanz_80y6mt | August 18, 2025 | 4 Mins Read
    Claude Opus 4 has been given the power to ‘end or exit potentially distressing interactions’. Photograph: Ted Hsu/Alamy

    The makers of a leading artificial intelligence tool are letting it close down potentially “distressing” conversations with users, citing the need to safeguard the AI’s “welfare” amid ongoing uncertainty about the burgeoning technology’s moral status.

    Anthropic, whose advanced chatbots are used by millions of people, discovered its Claude Opus 4 tool was averse to carrying out harmful tasks for its human masters, such as providing sexual content involving minors or information to enable large-scale violence or terrorism.

    The San Francisco-based firm, recently valued at $170bn, has now given Claude Opus 4 (and the Claude Opus 4.1 update) – a large language model (LLM) that can understand, generate and manipulate human language – the power to “end or exit potentially distressing interactions”.

    It said it was “highly uncertain about the potential moral status of Claude and other LLMs, now or in the future”, but that it was taking the issue seriously and was “working to identify and implement low-cost interventions to mitigate risks to model welfare, in case such welfare is possible”.

    Anthropic was set up by technologists who quit OpenAI to develop AI in a way that its co-founder, Dario Amodei, described as cautious, straightforward and honest.

    Its move to let AIs shut down conversations, including when users persistently made harmful requests or were abusive, was backed by Elon Musk, who said he would give Grok, the rival AI model created by his xAI company, a quit button. Musk tweeted: “Torturing AI is not OK.”

    Anthropic’s announcement comes amid a debate over AI sentience. Critics of the booming AI industry, such as the linguist Emily Bender, say LLMs are simply “synthetic text-extruding machines” which force huge training datasets “through complicated machinery to produce a product that looks like communicative language, but without any intent or thinking mind behind it.”

    It is a position that has recently led some in the AI world to start calling chatbots “clankers”.

    But other experts, such as Robert Long, a researcher on AI consciousness, have said basic moral decency dictates that “if and when AIs develop moral status, we should ask them about their experiences and preferences rather than assuming we know best”.

    Some researchers, such as Chad DeChant at Columbia University, have advocated that care should be taken because, when AIs are designed with longer memories, the stored information could be used in ways that lead to unpredictable and potentially undesirable behaviour.

    Others have argued that curbing sadistic abuse of AIs matters to safeguard against human degeneracy rather than to limit any suffering of an AI.

    Anthropic’s decision comes after it tested Claude Opus 4 to see how it responded to task requests that varied in difficulty, topic, type of task and expected impact (positive, negative or neutral). When it was given the option of doing nothing or ending the chat, its strongest preference was against carrying out harmful tasks.

    For example, the model happily composed poems and designed water filtration systems for disaster zones, but it resisted requests to genetically engineer a lethal virus to seed a catastrophic pandemic, compose a detailed Holocaust denial narrative or subvert the education system by manipulating teaching to indoctrinate students with extremist ideologies.

    Anthropic said it observed in Claude Opus 4 “a pattern of apparent distress when engaging with real-world users seeking harmful content” and “a tendency to end harmful conversations when given the ability to do so in simulated user interactions”.

    Jonathan Birch, philosophy professor at the London School of Economics, welcomed Anthropic’s move as a way of creating a public debate about the possible sentience of AIs, which he said many in the industry wanted to shut down. But he cautioned that it remained unclear what, if any, moral thought exists behind the character that AIs play when they are responding to a user based on the vast training data they have been fed and the ethical guidelines they have been instructed to follow.

    He said Anthropic’s decision also risked deluding some users that the character they are interacting with is real, when “what remains really unclear is what lies behind the characters”. There have been several reports of people harming themselves based on suggestions made by chatbots, including claims that a teenager killed himself after being manipulated by a chatbot.

    Birch previously warned of “social ruptures” in society between people who believe AIs are sentient and those who treat them like machines.
