{"id":44929,"date":"2026-02-21T16:04:40","date_gmt":"2026-02-21T16:04:40","guid":{"rendered":"https:\/\/naijaglobalnews.org\/?p=44929"},"modified":"2026-02-21T16:04:40","modified_gmt":"2026-02-21T16:04:40","slug":"anthropics-safety-first-ai-collides-with-the-pentagon-as-claude-expands-into-autonomous-agents","status":"publish","type":"post","link":"https:\/\/naijaglobalnews.org\/?p=44929","title":{"rendered":"Anthropic\u2019s safety-first AI collides with the Pentagon as Claude expands into autonomous agents"},"content":{"rendered":"<p class=\"\" data-block=\"sciam\/paragraph\">On February 5 Anthropic released Claude Opus 4.6, its most powerful artificial intelligence model. Among the model\u2019s new features is the ability to coordinate teams of autonomous agents\u2014multiple AIs that divide up the work and complete it in parallel. Twelve days after Opus 4.6\u2019s release, the company dropped Sonnet 4.6, a cheaper model that nearly matches Opus\u2019s coding and computer skills. In late 2024, when Anthropic first introduced models that could control computers, they could barely operate a browser. Now Sonnet 4.6 can navigate Web applications and fill out forms with human-level capability, according to Anthropic. And both models have a working memory large enough to hold a small library.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">Enterprise customers now make up roughly 80 percent of Anthropic\u2019s revenue, and the company closed a $30-billion funding round last week at a $380-billion valuation. By every available measure, Anthropic is one of the fastest-scaling technology companies in history.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">But behind the big product launches and valuation, Anthropic faces a severe threat: the Pentagon has signaled it may designate the company a \u201csupply chain risk\u201d\u2014a label more often associated with foreign adversaries\u2014unless it drops its restrictions on military use.
Such a designation could effectively force Pentagon contractors to strip Claude from sensitive work.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">Tensions boiled over after January 3, when U.S. special operations forces raided Venezuela and captured Nicol\u00e1s Maduro. The Wall Street Journal reported that forces used Claude during the operation via Anthropic\u2019s partnership with the defense contractor Palantir\u2014and Axios reported that the episode escalated an already fraught negotiation over what, exactly, Claude could be used for. When an Anthropic executive reached out to Palantir to ask whether the technology had been used in the raid, the question raised immediate alarms at the Pentagon. (Anthropic has disputed that the outreach was meant to signal disapproval of any specific operation.) Secretary of Defense Pete Hegseth is \u201cclose\u201d to severing the relationship, a senior administration official told Axios, adding, \u201cWe are going to make sure they pay a price for forcing our hand like this.\u201d<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">The collision exposes a question: Can a company founded to prevent AI catastrophe hold its ethical lines once its most powerful tools\u2014autonomous agents capable of processing vast datasets, identifying patterns and acting on their conclusions\u2014are running inside classified military networks? Is a \u201csafety first\u201d AI compatible with a client that wants systems that can reason, plan and act on their own at military scale?<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">Anthropic has drawn two red lines: no mass surveillance of Americans and no fully autonomous weapons.
CEO Dario Amodei has said Anthropic will support \u201cnational defense in all ways except those which would make us more like our autocratic adversaries.\u201d Other major labs\u2014OpenAI, Google and xAI\u2014have agreed to loosen safeguards for use in the Pentagon\u2019s unclassified systems, but their tools aren\u2019t yet running inside the military\u2019s classified networks. The Pentagon has demanded that AI be available for \u201call lawful purposes.\u201d<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">The friction tests Anthropic\u2019s central thesis. The company was founded in 2021 by former OpenAI executives who believed the industry was not taking safety seriously enough. They positioned Claude as the ethical alternative. In late 2024 Anthropic made Claude available on a Palantir platform with a cloud security level up to \u201csecret\u201d\u2014making Claude, by public accounts, the first large language model operating inside classified systems.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">The question the standoff now forces is whether safety-first is a coherent identity once a technology is embedded in classified military operations and whether red lines are actually possible. \u201cThese words seem simple: illegal surveillance of Americans,\u201d says Emelia Probasco, a senior fellow at Georgetown\u2019s Center for Security and Emerging Technology. \u201cBut when you get down to it, there are whole armies of lawyers who are trying to sort out how to interpret that phrase.\u201d<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">Consider the precedent. After the Edward Snowden revelations, the U.S. government defended the bulk collection of phone metadata\u2014who called whom, when and for how long\u2014arguing that these kinds of data didn\u2019t carry the same privacy protections as the contents of conversations. The privacy debate then was about human analysts searching those records. 
Now imagine an AI system querying vast datasets\u2014mapping networks, spotting patterns, flagging people of interest. The legal framework we have was built for an era of human review, not machine-scale analysis.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">\u201cIn some sense, any kind of mass data collection that you ask an AI to look at is mass surveillance by simple definition,\u201d says Peter Asaro, co-founder of the International Committee for Robot Arms Control. Axios reported that the senior official \u201cargued there is considerable gray area around\u201d Anthropic\u2019s restrictions \u201cand that it\u2019s unworkable for the Pentagon to have to negotiate individual use-cases with\u201d the company. Asaro offers two readings of that complaint. The generous interpretation is that surveillance is genuinely impossible to define in the age of AI. The pessimistic one, Asaro says, is that \u201cthey really want to use those for mass surveillance and autonomous weapons and don\u2019t want to say that, so they call it a gray area.\u201d<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">Regarding Anthropic\u2019s other red line, autonomous weapons, the definition is narrow enough to be manageable\u2014systems that select and engage targets without human supervision. But Asaro sees a more troubling gray zone. He points to the Israeli military\u2019s Lavender and Gospel systems, which reportedly use AI to generate massive target lists that go to a human operator for approval before strikes are carried out. \u201cYou\u2019ve automated, essentially, the targeting element, which is something [that] we\u2019re very concerned with and [that is] closely related, even if it falls outside the narrow strict definition,\u201d he says.
The question is whether Claude, operating inside Palantir\u2019s systems on classified networks, could be doing something similar\u2014processing intelligence, identifying patterns, surfacing persons of interest\u2014without anyone at Anthropic being able to say precisely where the analytical work ends and the targeting begins.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">The Maduro operation tests exactly that distinction. \u201cIf you\u2019re collecting data and intelligence to identify targets, but humans are deciding, \u2018Okay, this is the list of targets we\u2019re actually going to bomb\u2019\u2014then you have that level of human supervision we\u2019re trying to require,\u201d Asaro says. \u201cOn the other hand, you\u2019re still becoming reliant on these AIs to choose these targets, and how much vetting and how much digging into the validity or lawfulness of those targets is a separate question.\u201d<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">Anthropic may be trying to draw the line more narrowly\u2014between mission planning, where Claude might help identify bombing targets, and the mundane work of processing documentation. \u201cThere are all of these kind of boring applications of large language models,\u201d Probasco says.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">But the capabilities of Anthropic\u2019s models may make those distinctions hard to sustain. Opus 4.6\u2019s agent teams can split a complex task and work in parallel\u2014an advancement in autonomous data processing that could transform military intelligence. Both Opus and Sonnet can navigate applications, fill out forms and work across platforms with minimal oversight. These features driving Anthropic\u2019s commercial dominance are what make Claude so attractive inside a classified network. A model with a huge working memory can also hold an entire intelligence dossier. 
A system that can coordinate autonomous agents to debug a code base can coordinate them to map an insurgent supply chain. The more capable Claude becomes, the thinner the line between the analytical grunt work Anthropic is willing to support and the surveillance and targeting it has pledged to refuse.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">As Anthropic pushes the frontier of autonomous AI, the military\u2019s demand for those tools will only grow louder. Probasco fears the clash with the Pentagon creates a false binary between safety and national security. \u201cHow about we have safety and national security?\u201d she asks.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>On February 5 Anthropic released Claude Opus 4.6, its most powerful artificial intelligence model. Among the model\u2019s new features is the ability to coordinate teams of autonomous agents\u2014multiple AIs that divide up the work and complete it in parallel. Twelve days after Opus 4.6\u2019s release, the company dropped Sonnet 4.6, a cheaper model that 
nearly<\/p>\n","protected":false},"author":1,"featured_media":44930,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[50],"tags":[4733,5493,7889,5495,22214,4331,1230,23227],"class_list":{"0":"post-44929","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-environment","8":"tag-agents","9":"tag-anthropics","10":"tag-autonomous","11":"tag-claude","12":"tag-collides","13":"tag-expands","14":"tag-pentagon","15":"tag-safetyfirst"},"_links":{"self":[{"href":"https:\/\/naijaglobalnews.org\/index.php?rest_route=\/wp\/v2\/posts\/44929","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/naijaglobalnews.org\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/naijaglobalnews.org\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/naijaglobalnews.org\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/naijaglobalnews.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=44929"}],"version-history":[{"count":0,"href":"https:\/\/naijaglobalnews.org\/index.php?rest_route=\/wp\/v2\/posts\/44929\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/naijaglobalnews.org\/index.php?rest_route=\/wp\/v2\/media\/44930"}],"wp:attachment":[{"href":"https:\/\/naijaglobalnews.org\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=44929"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/naijaglobalnews.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=44929"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/naijaglobalnews.org\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=44929"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}