Naija Global News
    Science

    Why we should limit the autonomy of AI-enabled weapons

By onlyplanz_80y6mt · October 29, 2025 · 4 min read

The SkyShark, an autonomous drone built in the United Kingdom, is put on display at the Defence and Security Equipment International exhibition in London. Credit: John Keeble/Getty Images


    Weapons capable of identifying and attacking targets automatically have been in use for more than 80 years. An early example is the Mark 24 Fido, a US anti-submarine torpedo equipped with microphones to home in on targets, which was first deployed against German U-boats in 1943.


    Such ‘first-wave’ autonomous systems were designed to be used in narrowly defined scenarios and programmed to act in response to signals such as the radiofrequency emissions of specific targets. The past ten years have seen the development of more advanced systems that can use artificial intelligence to navigate, identify and destroy targets with little or no human intervention. This has led to growing calls from human-rights groups to ban or regulate the technologies.

    Nehal Bhuta, a professor of international law at the University of Edinburgh, UK, has been investigating the legality and ethics of autonomous weapons for more than a decade. He was among the authors of a report on the responsible use of AI presented to the United Nations Security Council last month by Netherlands Prime Minister Dick Schoof.

    Bhuta says that autonomous weapons, especially those that are AI-enabled, raise multiple ethical and legal concerns, including determining responsibility for system failures and potentially encouraging the intrusive collection of civilian data. He says there is still time for the international community to agree on principles and regulations to limit the risk, and warns that an arms race could ensue if it fails to do so.

    Which legal frameworks and principles currently apply to autonomous weapons systems?

    There is no specific legal framework that applies to the use of autonomy or AI in these systems. Under international humanitarian law, based on the Hague Conventions and the Geneva Conventions, which together set out international law on war and war crimes, weapons must be capable of being used in a manner that can distinguish between civilian and military targets. Attacks must not result in disproportionate harm to civilians, and combatants must take precautions to verify they have the right target and reduce the risk of civilian harm. These international laws apply to all weapons, including the use of advanced autonomous systems, such as the drones deployed by Ukraine in June, which used machine learning to select, identify and strike targets deep within Russia on the basis of preprogrammed instructions.

Professor Nehal Bhuta says it is important for the international community to agree on guidelines regarding the use of autonomous weapons. Credit: Edinburgh Law School

    What are the risks associated with autonomous weapons?

    Insufficient care in their development and deployment could compromise compliance with the principles of distinction and proportionality. Could the system generate too many false positives when identifying targets? Might an autonomous weapon calculate that large numbers of civilian deaths are an acceptable price to pay when targeting a suspected enemy soldier? We don’t really know yet, because the technology is immature, but these are vast risks. There is also a danger that if a system fails to accurately process incoming data in a rapidly changing environment, it could target the wrong forces, or civilians.

    To make these systems effective, you have to acquire masses of data, including biometric information, voice calls, e-mails and details of physical movements. That’s a concern if this is done without the consent of those involved. The more you want to do, the more data you need. This creates an incentive to collect data more intrusively.

    Who is legally and ethically responsible when autonomous weapons kill?

    I think it is likely that some sovereign states will in future deploy weapons that are capable of making decisions to kill. The question is whether countries wish to regulate such systems. Effective legal frameworks require ways of attributing responsibility for violations. It can become difficult with complex autonomous weapons systems to identify the individuals responsible for failures and violations.

    The operators of future systems might not be adequately trained in when to ignore a system’s recommendations. They might also develop automation bias, making them unwilling to question a technologically advanced machine. A system could be systematically biased in how it acquires targets — in which case, responsibility would lie somewhere between the developer and the military officials who authorize its use.

    There is a risk that accountability becomes so diffuse that it’s hard to identify the individuals or groups of agents responsible for violations and failures. This is a common problem with complex modern technologies, and I think the answer lies in the adoption of regulatory frameworks for the development and use of autonomous weapons systems.

    What do you say to those who call for a ban on autonomous weapons?
