Naija Global News
    Technology

Tech companies and UK child safety agencies to test AI tools’ ability to create abuse images

By onlyplanz_80y6mt · November 12, 2025 · 3 Mins Read
    Kanishka Narayan, the minister for AI and online safety, said the measure was ‘ultimately stopping abuse before it happens’. Photograph: Maja Smiejkowska/Reuters

    Tech companies and child protection agencies will be given the power to test whether artificial intelligence tools can produce child abuse images under a new UK law.

The announcement was made as a safety watchdog revealed that reports of AI-generated child sexual abuse material (CSAM) have more than doubled in the past year, from 199 in 2024 to 426 in 2025.

    Under the change, the government will give designated AI companies and child safety organisations permission to examine AI models – the underlying technology for chatbots such as ChatGPT and image generators such as Google’s Veo 3 – and ensure they have safeguards to prevent them from creating images of child sexual abuse.

    Kanishka Narayan, the minister for AI and online safety, said the move was “ultimately about stopping abuse before it happens”, adding: “Experts, under strict conditions, can now spot the risk in AI models early.”

    The changes have been introduced because it is illegal to create and possess CSAM, meaning that AI developers and others cannot create such images as part of a testing regime. Until now, the authorities have had to wait until AI-generated CSAM is uploaded online before dealing with it. This law is aimed at heading off that problem by helping to prevent the creation of those images at source.

    The changes are being introduced by the government as amendments to the crime and policing bill, legislation which is also introducing a ban on possessing, creating or distributing AI models designed to generate child sexual abuse material.

    This week Narayan visited the London base of Childline, a helpline for children, and listened to a mock-up of a call to counsellors featuring a report of AI-based abuse. The call portrayed a teenager seeking help after he had been blackmailed by a sexualised deepfake of himself, constructed using AI.

    “When I hear about children experiencing blackmail online, it is a source of extreme anger in me and rightful anger amongst parents,” he said.

    The Internet Watch Foundation, which monitors CSAM online, said reports of AI-generated abuse material – such as a webpage that may contain multiple images – had more than doubled so far this year. Instances of category A material – the most serious form of abuse – rose from 2,621 images or videos to 3,086.

    Girls were overwhelmingly targeted, making up 94% of illegal AI images in 2025, while depictions of newborns to two-year-olds rose from five in 2024 to 92 in 2025.

Kerry Smith, the chief executive of the Internet Watch Foundation, said the law change could be “a vital step to make sure AI products are safe before they are released”.

    “AI tools have made it so survivors can be victimised all over again with just a few clicks, giving criminals the ability to make potentially limitless amounts of sophisticated, photorealistic child sexual abuse material,” she said. “Material which further commodifies victims’ suffering, and makes children, particularly girls, less safe on and off line.”

    Childline also released details of counselling sessions where AI has been mentioned. AI harms mentioned in the conversations include: using AI to rate weight, body and looks; chatbots dissuading children from talking to safe adults about abuse; being bullied online with AI-generated content; and online blackmail using AI-faked images.

    Between April and September this year, Childline delivered 367 counselling sessions where AI, chatbots and related terms were mentioned, four times as many as in the same period last year. Half of the mentions of AI in the 2025 sessions were related to mental health and wellbeing, including using chatbots for support and AI therapy apps.
