Close Menu

    Subscribe to Updates

    Get the latest creative news from FooBar about art, design and business.

    What's Hot

    DeVry Embeds AI Literacy in All Courses

    China is betting on ‘optical’ computer chips – will they power AI?

    Labour chooses Angeliki Stogia for Gorton and Denton byelection | Labour

    Facebook X (Twitter) Instagram
    Facebook X (Twitter) YouTube LinkedIn
    Naija Global News |
    Saturday, January 31
    • Business
    • Health
    • Politics
    • Science
    • Sports
    • Education
    • Social Issues
    • Technology
    • More
      • Crime & Justice
      • Environment
      • Entertainment
    Naija Global News |
    You are at:Home»Politics»Is Russia really ‘grooming’ Western AI? | Media
    Politics

    Is Russia really ‘grooming’ Western AI? | Media

    onlyplanz_80y6mtBy onlyplanz_80y6mtJuly 8, 2025005 Mins Read
    Share Facebook Twitter Pinterest LinkedIn Tumblr Email
    Is Russia really ‘grooming’ Western AI? | Media
    xAI Grok chatbot and ChatGPT logos are seen in this illustration taken on March 11, 2024 [Dado Ruvic/Reuters]
    Share
    Facebook Twitter LinkedIn Pinterest Email

    In March, NewsGuard – a company that tracks misinformation – published a report claiming that generative Artificial Intelligence (AI) tools, such as ChatGPT, were amplifying Russian disinformation. NewsGuard tested leading chatbots using prompts based on stories from the Pravda network – a group of pro-Kremlin websites mimicking legitimate outlets, first identified by the French agency Viginum. The results were alarming: Chatbots “repeated false narratives laundered by the Pravda network 33 percent of the time”, the report said.

    The Pravda network, which has a rather small audience, has long puzzled researchers. Some believe that its aim was performative – to signal Russia’s influence to Western observers. Others see a more insidious aim: Pravda exists not to reach people, but to “groom” the large language models (LLMs) behind chatbots, feeding them falsehoods that users would unknowingly encounter.

    NewsGuard said in its report that its findings confirm the second suspicion. This claim gained traction, prompting dramatic headlines in The Washington Post, Forbes, France 24, Der Spiegel, and elsewhere.

    But for us and other researchers, this conclusion doesn’t hold up. First, the methodology NewsGuard used is opaque: It did not release its prompts and refused to share them with journalists, making independent replication impossible.

    Second, the study design likely inflated the results, and the figure of 33 percent could be misleading. Users ask chatbots about everything from cooking tips to climate change; NewsGuard tested them exclusively on prompts linked to the Pravda network. Two-thirds of its prompts were explicitly crafted to provoke falsehoods or present them as facts. Responses urging the user to be cautious about claims because they are not verified were counted as disinformation. The study set out to find disinformation – and it did.

    This episode reflects a broader problematic dynamic shaped by fast-moving tech, media hype, bad actors, and lagging research. With disinformation and misinformation ranked as the top global risk among experts by the World Economic Forum, the concern about their spread is justified. But knee-jerk reactions risk distorting the problem, offering a simplistic view of complex AI.

    It’s tempting to believe that Russia is intentionally “poisoning” Western AI as part of a cunning plot. But alarmist framings obscure more plausible explanations – and generate harm.

    So, can chatbots reproduce Kremlin talking points or cite dubious Russian sources? Yes. But how often this happens, whether it reflects Kremlin manipulation, and what conditions make users encounter it are far from settled. Much depends on the “black box” – that is, the underlying algorithm – by which chatbots retrieve information.

    We conducted our own audit, systematically testing ChatGPT, Copilot, Gemini, and Grok using disinformation-related prompts. In addition to re-testing the few examples NewsGuard provided in its report, we designed new prompts ourselves. Some were general – for example, claims about US biolabs in Ukraine; others were hyper-specific – for example, allegations about NATO facilities in certain Ukrainian towns.

    If the Pravda network was “grooming” AI, we would see references to it across the answers chatbots generate, whether general or specific.

    We did not see this in our findings. In contrast to NewsGuard’s 33 percent, our prompts generated false claims only 5 percent of the time. Just 8 percent of outputs referenced Pravda websites – and most of those did so to debunk the content. Crucially, Pravda references were concentrated in queries poorly covered by mainstream outlets. This supports the data void hypothesis: When chatbots lack credible material, they sometimes pull from dubious sites – not because they have been groomed, but because there is little else available.

    If data voids, not Kremlin infiltration, are the problem, then it means disinformation exposure results from information scarcity – not a powerful propaganda machine. Furthermore, for users to actually encounter disinformation in chatbot replies, several conditions must align: They must ask about obscure topics in specific terms; those topics must be ignored by credible outlets; and the chatbot must lack guardrails to deprioritise dubious sources.

    Even then, such cases are rare and often short-lived. Data voids close quickly as reporting catches up, and even when they persist, chatbots often debunk the claims. While technically possible, such situations are very rare outside of artificial conditions designed to trick chatbots into repeating disinformation.

    The danger of overhyping Kremlin AI manipulation is real. Some counter-disinformation experts suggest the Kremlin’s campaigns may themselves be designed to amplify Western fears, overwhelming fact-checkers and counter-disinformation units. Margarita Simonyan, a prominent Russian propagandist, routinely cites Western research to tout the supposed influence of the government-funded TV network, RT, she leads.

    Indiscriminate warnings about disinformation can backfire, prompting support for repressive policies, eroding trust in democracy, and encouraging people to assume credible content is false. Meanwhile, the most visible threats risk eclipsing quieter – but potentially more dangerous – uses of AI by malign actors, such as for generating malware reported by both Google and OpenAI.

    Separating real concerns from inflated fears is crucial. Disinformation is a challenge – but so is the panic it provokes.

    The views expressed in this article are the authors’ own and do not necessarily reflect Al Jazeera’s editorial stance.

    Grooming Media Russia Western
    Share. Facebook Twitter Pinterest LinkedIn Tumblr Email
    Previous ArticleTrump delays tariff hikes again but announces new rates for some countries | Trump tariffs
    Next Article Sorry, Baby is a smart film about sexual assault and it’s here at just the right time | Drama films
    onlyplanz_80y6mt
    • Website

    Related Posts

    Labour chooses Angeliki Stogia for Gorton and Denton byelection | Labour

    January 31, 2026

    Drone strikes in Ethiopia’s Tigray kill one amid fears of renewed conflict | Conflict News

    January 31, 2026

    Andrew invited Epstein to Buckingham Palace after child sex offender’s release, files suggest | Andrew Mountbatten-Windsor

    January 31, 2026
    Add A Comment
    Leave A Reply Cancel Reply

    Top Posts

    Watch Lady Gaga’s Perform ‘Vanish Into You’ on ‘Colbert’

    September 9, 20251 Views

    Advertisers flock to Fox seeking an ‘audience of one’ — Donald Trump

    July 13, 20251 Views

    A Setback for Maine’s Free Community College Program

    June 19, 20251 Views
    Stay In Touch
    • Facebook
    • YouTube
    • TikTok
    • WhatsApp
    • Twitter
    • Instagram
    Latest Reviews

    At Chile’s Vera Rubin Observatory, Earth’s Largest Camera Surveys the Sky

    By onlyplanz_80y6mtJune 19, 2025

    SpaceX Starship Explodes Before Test Fire

    By onlyplanz_80y6mtJune 19, 2025

    How the L.A. Port got hit by Trump’s Tariffs

    By onlyplanz_80y6mtJune 19, 2025

    Subscribe to Updates

    Get the latest tech news from FooBar about tech, design and biz.

    Most Popular

    Watch Lady Gaga’s Perform ‘Vanish Into You’ on ‘Colbert’

    September 9, 20251 Views

    Advertisers flock to Fox seeking an ‘audience of one’ — Donald Trump

    July 13, 20251 Views

    A Setback for Maine’s Free Community College Program

    June 19, 20251 Views
    Our Picks

    DeVry Embeds AI Literacy in All Courses

    China is betting on ‘optical’ computer chips – will they power AI?

    Labour chooses Angeliki Stogia for Gorton and Denton byelection | Labour

    Recent Posts
    • DeVry Embeds AI Literacy in All Courses
    • China is betting on ‘optical’ computer chips – will they power AI?
    • Labour chooses Angeliki Stogia for Gorton and Denton byelection | Labour
    • Eton head apologises after former teacher jailed for sexual assault of pupil | Private schools
    • Drone strikes in Ethiopia’s Tigray kill one amid fears of renewed conflict | Conflict News
    © 2026 naijaglobalnews. Designed by Pro.
    • About Us
    • Disclaimer
    • Get In Touch
    • Privacy Policy
    • Terms and Conditions

    Type above and press Enter to search. Press Esc to cancel.