    Naija Global News |
    Saturday, March 7
    What does the US military’s feud with Anthropic mean for AI used in war? | AI (artificial intelligence)

By onlyplanz_80y6mt | March 7, 2026 | 7 Mins Read
    Pete Hegseth, the US defense secretary, and Dario Amodei, the CEO of Anthropic. Composite: AP, Reuters

Anthropic’s ongoing fight with the Department of Defense over what safety restrictions it can put on its artificial intelligence models has captivated the tech industry, serving as a test both of how AI may be used in war and of the government’s power to compel companies to meet its demands.

    The negotiations have revolved around Anthropic’s refusal to allow the federal government to use its Claude AI for domestic mass surveillance or autonomous weapons systems, but the dispute also reflects the messy nature of what happens when tech companies have their products integrated into conflict. The Pentagon this week declared Anthropic a supply chain risk for its refusal to agree to the government’s terms, while Anthropic has vowed to challenge the designation in court.

The Guardian spoke with Sarah Kreps, a professor and director of the Tech Policy Institute at Cornell University who previously served in the United States Air Force, about how the feud has played out.

    You’ve worked for a while on problems around “dual use technology”. What happens when there’s a consumer technology that also gets used for classified or military purposes?

I’ve thought about this a lot because I was in the military, on the side that was developing and acquiring new technologies. We were always getting criticism about why it was taking so long, and now, watching what’s happening, I realize why.

What you would develop for classified and military contexts is very different from what Anthropic has developed for consumer use, like when I use Claude. The challenge for the military is that these technologies are so useful it can’t wait until a military-grade version is available. It needs to act quickly because of how valuable these tools are, but it’s not surprising that it ran into cultural differences, between not just an AI platform and the military, but an AI platform that has tried to cultivate a reputation as being more safety conscious.

    One element in this feud is that Anthropic has branded itself as a safety-forward company, but then it did sign onto a deal with the military.

    Yes, there is a way in which it’s surprising that Anthropic would be surprised by where this ended up. Part of the challenge is that Anthropic seems to have made the decision a year or two ago that ChatGPT was going to be for individual users and Anthropic was going to try to corner the enterprise market. That means they’re trying to do business with organizations, rather than trying to sell individual plans.

    The puzzle to me is that they were then doing business with the Pentagon and Palantir, which is in the business of using AI for what some people would say are questionable purposes. So that decision was surprising to me because it was very much at odds with the brand that Anthropic was trying to curate.

It seems like Anthropic was OK with a pretty wide range of uses of its technology, but that it hit a red line with domestic mass surveillance and lethal autonomous weapons.

    There are a couple of possibilities. One is that some of this had to do with relationships between the people in Anthropic and the Trump administration, which led to a downward spiral of distrust.

Second, there was the situation in Venezuela and then the politics around ICE activities. There is this question of what it actually means to use these technologies lawfully. One person’s definition of lawful might look very different from another’s.

    The Pentagon’s argument was, in part, that if there’s a national defense issue we shouldn’t have to call up Dario Amodei to get approval. It does seem like there is an actual question here around what role private tech companies have in national security decision-making.

    If you recall the case of the San Bernardino killer’s iPhone, authorities were worried that this was a ticking bomb situation and they needed Apple to get into the phone. [In 2016, the FBI demanded Apple create a backdoor to grant them access to a mass shooter’s phone. Apple refused on privacy grounds, resulting in the FBI seeking out an independent third party to hack into the device].

    The difference here with Anthropic’s AI is that once you hand this over to the military, you no longer need Anthropic’s approval to use it as you see fit. It’s the difference between hardware and software. You can repurpose this software and use it in ways that maybe weren’t part of the explicit agreement, but now you can justify it on the basis of national security. Then Anthropic has lost all its leverage because it’s in the hands of these national security professionals.

    And Anthropic wouldn’t be able to tell what it’s even being used for, correct?

Yeah, exactly right. It goes not just into a black box, but into black ops and classified systems that are closed off.

I’ve found it interesting this week that a lot of longstanding questions about AI use in the military seem to be coming to a head. You’ve been following these issues for a long time. What are you thinking as you watch this current fight?

When I would hear Anthropic’s CEO speak, he would talk about these existential risks and the misappropriation of AI for bioterrorism. I always thought those were either too distant or too out of reach. I thought this sort of more mundane case was more of a risk.

There have also been people foreshadowing these questions about autonomous weapons for a long time. The challenge is: how do you ever know whether there’s actually a human in the loop? This was a concern that Anthropic had – how do we know if these systems are being used in a fully autonomous way? The US says it is not going to use AI in a fully autonomous capacity, but it’s not clear what the process looks like for ensuring that doesn’t happen. This was a long time coming, but I guess it was inevitable that we would go in this direction, just because the technology has gotten more and more sophisticated. The fact of now being involved in a conflict just accelerates those timelines.

    We talk a lot about threats from AI and these red lines that people backed away from, but how is AI already being used in warfare?

You can see how it’s extremely useful in a military setting. I did some work on the intel side, and one of the challenges is not a lack of content, it’s the signal-to-noise ratio. You have a huge volume of information, but it can be really hard to connect the dots, and that’s something AI is very good at. You feed it large amounts of information and it generates outputs that help identify the signal.

If you’re looking for pattern recognition, AI is really good at it. You can specify the correlates or characteristics you’re looking for, and then it can go out and identify things – say, an Iranian naval vessel – based on what you’ve programmed it to identify. That hasn’t been super controversial in some ways, because those targets are fairly concrete.

Where people get more uncomfortable is in a setting where the US, for example, would conduct counter-terrorism strikes. You have an individual on the ground who doesn’t have a lot of identifiable characteristics, and so that is a much more precarious situation for AI, where you’d really want to make sure you’re triple-checking. He could be a combatant, he could be a civilian. It’s not a naval vessel or a surface-to-air missile, where it’s harder to get that wrong.
