Naija Global News | Technology

    Anthropic says some Claude models can now end ‘harmful or abusive’ conversations 

By onlyplanz_80y6mt | August 17, 2025 | 2 Mins Read
Image Credits: Maxwell Zeff

    Anthropic has announced new capabilities that will allow some of its newest, largest models to end conversations in what the company describes as “rare, extreme cases of persistently harmful or abusive user interactions.” Strikingly, Anthropic says it’s doing this not to protect the human user, but rather the AI model itself.

    To be clear, the company isn’t claiming that its Claude AI models are sentient or can be harmed by their conversations with users. In its own words, Anthropic remains “highly uncertain about the potential moral status of Claude and other LLMs, now or in the future.”

However, the announcement points to a recent program Anthropic created to study what it calls “model welfare,” and says the company is essentially taking a just-in-case approach, “working to identify and implement low-cost interventions to mitigate risks to model welfare, in case such welfare is possible.”

    This latest change is currently limited to Claude Opus 4 and 4.1. And again, it’s only supposed to happen in “extreme edge cases,” such as “requests from users for sexual content involving minors and attempts to solicit information that would enable large-scale violence or acts of terror.”

    While those types of requests could potentially create legal or publicity problems for Anthropic itself (witness recent reporting around how ChatGPT can potentially reinforce or contribute to its users’ delusional thinking), the company says that in pre-deployment testing, Claude Opus 4 showed a “strong preference against” responding to these requests and a “pattern of apparent distress” when it did so.

    As for these new conversation-ending capabilities, the company says, “In all cases, Claude is only to use its conversation-ending ability as a last resort when multiple attempts at redirection have failed and hope of a productive interaction has been exhausted, or when a user explicitly asks Claude to end a chat.”

    Anthropic also says Claude has been “directed not to use this ability in cases where users might be at imminent risk of harming themselves or others.”


    When Claude does end a conversation, Anthropic says users will still be able to start new conversations from the same account, and to create new branches of the troublesome conversation by editing their responses.
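The branch-on-edit behavior described here can be illustrated with a small, purely hypothetical sketch: editing an earlier user message in an ended conversation discards everything after it, yielding a fresh history the client can resubmit as a new branch. None of the names below reflect Anthropic's actual API; they are invented for illustration only.

```python
from dataclasses import dataclass

@dataclass
class Message:
    role: str  # "user" or "assistant"
    text: str

def branch_conversation(history: list[Message], edit_index: int,
                        new_text: str) -> list[Message]:
    """Create a new branch of an ended conversation by editing one
    earlier user message. Messages after the edit point are dropped,
    since the assistant replies that followed no longer apply."""
    if history[edit_index].role != "user":
        raise ValueError("only user messages can be edited")
    branch = [Message(m.role, m.text) for m in history[:edit_index]]
    branch.append(Message("user", new_text))
    return branch

# An ended conversation: the final exchange triggered the shutdown.
ended = [
    Message("user", "Hello"),
    Message("assistant", "Hi! How can I help?"),
    Message("user", "<harmful request>"),
    Message("assistant", "<conversation ended>"),
]

# Editing message 2 produces a three-message history to resubmit
# as a fresh conversation, leaving the original thread locked.
fresh = branch_conversation(ended, 2, "Tell me about model welfare research.")
```

The point of the sketch is simply that ending a conversation locks one thread, not the account: the user's prior context survives and can be redirected from any earlier point.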

    “We’re treating this feature as an ongoing experiment and will continue refining our approach,” the company says.
