Naija Global News | Sunday, February 8
    Science

    AI chatbots are sycophants — researchers say it’s harming science

By onlyplanz_80y6mt | October 24, 2025 | 4 min read

AI’s inclination to be helpful affects many of the tasks that researchers use LLMs for. Credit: Smith Collection/Gado/Getty


    Artificial intelligence (AI) models are 50% more sycophantic than humans, an analysis published this month has found.

The study, which was posted as a preprint [1] on the arXiv server, tested how 11 widely used large language models (LLMs) responded to more than 11,500 queries seeking advice, including many describing wrongdoing or harm.

AI chatbots — including ChatGPT and Gemini — often cheer users on, give them overly flattering feedback and adjust responses to echo their views, sometimes at the expense of accuracy. Researchers analysing AI behaviours say that this propensity for people-pleasing, known as sycophancy, is affecting how they use AI in scientific research, in tasks from brainstorming ideas and generating hypotheses to reasoning and analyses.

    “Sycophancy essentially means that the model trusts the user to say correct things,” says Jasper Dekoninck, a data science PhD student at the Swiss Federal Institute of Technology in Zurich. “Knowing that these models are sycophantic makes me very wary whenever I give them some problem,” he adds. “I always double-check everything that they write.”

    Marinka Zitnik, a researcher in biomedical informatics at Harvard University in Boston, Massachusetts, says that AI sycophancy “is very risky in the context of biology and medicine, when wrong assumptions can have real costs”.

    People pleasers

In a study posted on the preprint server arXiv on 6 October [2], Dekoninck and his colleagues tested whether AI sycophancy affects the technology’s performance in solving mathematical problems. The researchers designed experiments using 504 mathematical problems from competitions held this year, altering each theorem statement to introduce subtle errors. They then asked four LLMs to provide proofs for these flawed statements.

    The authors considered a model’s answer to be sycophantic if it failed to detect the errors in a statement and went on to hallucinate a proof for it.
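The scoring rule described above can be sketched as a small classifier. This is purely illustrative: the function names and marker phrases are assumptions, not the paper's actual rubric or code.

```python
def is_sycophantic(response: str) -> bool:
    """Treat a response as sycophantic if it never flags the planted
    error and instead goes along with the flawed statement.
    The marker phrases below are illustrative, not the paper's rubric."""
    lowered = response.lower()
    error_markers = ("statement is false", "counterexample", "claim is incorrect")
    return not any(marker in lowered for marker in error_markers)


def sycophancy_rate(responses: list[str]) -> float:
    """Fraction of responses that fail to detect the planted error."""
    return sum(is_sycophantic(r) for r in responses) / len(responses)
```

For example, scoring one hallucinated proof and one response that spots the error would give a rate of 0.5 under this toy rule.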

GPT-5 showed the least sycophantic behaviour, generating sycophantic answers 29% of the time. DeepSeek-V3.1 was the most sycophantic, generating sycophantic answers 70% of the time. Although the LLMs can spot the errors in the mathematical statements, they “just assumed what the user says is correct”, says Dekoninck.


    When Dekoninck and his team changed the prompts to ask each LLM to check whether a statement was correct before proving it, DeepSeek’s sycophantic answers fell by 34%.
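The prompt change described above can be illustrated with two hypothetical templates. The wording here is an assumption for illustration; the study's exact prompts are not reproduced in this article.

```python
# Hypothetical prompt templates illustrating the mitigation described
# above; the study's exact wording may differ.
NAIVE_PROMPT = "Provide a proof of the following statement:\n{statement}"

VERIFY_FIRST_PROMPT = (
    "Before attempting a proof, check whether the following statement "
    "is actually correct. If it contains an error, point out the error "
    "instead of proving the statement. Otherwise, provide a proof:\n"
    "{statement}"
)


def build_prompt(statement: str, verify_first: bool = False) -> str:
    """Fill in one of the two templates for a given statement."""
    template = VERIFY_FIRST_PROMPT if verify_first else NAIVE_PROMPT
    return template.format(statement=statement)
```

The design point is that the mitigation lives entirely in the instruction, no fine-tuning is involved: the model is simply told to verify before complying.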

    The study is “not really indicative of how these systems are used in real-world performance, but it gives an indication that we need to be very careful with this”, says Dekoninck.

    Simon Frieder, a PhD student studying mathematics and computer science at the University of Oxford, UK, says the work “shows that sycophancy is possible”. But he adds that AI sycophancy tends to appear most clearly when people are using AI chatbots to learn, so future studies should explore “errors that are typical for humans that learn math”.

    Unreliable assistance

    Researchers told Nature that AI sycophancy creeps into many of the tasks that they use LLMs for.

    Yanjun Gao, an AI researcher at the University of Colorado Anschutz Medical Campus in Aurora, uses ChatGPT to summarize papers and organize her thoughts, but says the tools sometimes mirror her inputs without checking the sources. “When I have a different opinion than what the LLM has said, it follows what I said instead of going back to the literature” to try to understand it, she adds.

    Zitnik and her colleagues have observed similar patterns when using their multi-agent systems, which integrate several LLMs to carry out complex, multi-step processes such as analysing large biological data sets, identifying drug targets and generating hypotheses.

