Naija Global News
    Technology

    Are bad incentives to blame for AI hallucinations?

By onlyplanz_80y6mt | September 7, 2025 | 3 Mins Read
[Image: ChatGPT logo. Image Credits: Silas Stein / picture alliance / Getty Images]

    A new research paper from OpenAI asks why large language models like GPT-5 and chatbots like ChatGPT still hallucinate, and whether anything can be done to reduce those hallucinations.

    In a blog post summarizing the paper, OpenAI defines hallucinations as “plausible but false statements generated by language models,” and it acknowledges that despite improvements, hallucinations “remain a fundamental challenge for all large language models” — one that will never be completely eliminated.

    To illustrate the point, researchers say that when they asked “a widely used chatbot” about the title of Adam Tauman Kalai’s Ph.D. dissertation, they got three different answers, all of them wrong. (Kalai is one of the paper’s authors.) They then asked about his birthday and received three different dates. Once again, all of them were wrong.

    How can a chatbot be so wrong — and sound so confident in its wrongness? The researchers suggest that hallucinations arise, in part, because of a pretraining process that focuses on getting models to correctly predict the next word, without true or false labels attached to the training statements: “The model sees only positive examples of fluent language and must approximate the overall distribution.”

    “Spelling and parentheses follow consistent patterns, so errors there disappear with scale,” they write. “But arbitrary low-frequency facts, like a pet’s birthday, cannot be predicted from patterns alone and hence lead to hallucinations.”

    The paper’s proposed solution, however, focuses less on the initial pretraining process and more on how large language models are evaluated. It argues that current evaluation methods don’t cause hallucinations themselves, but they “set the wrong incentives.”

    The researchers compare these evaluations to the kind of multiple-choice tests where random guessing makes sense, because “you might get lucky and be right,” while leaving the answer blank “guarantees a zero.”


    “In the same way, when models are graded only on accuracy, the percentage of questions they get exactly right, they are encouraged to guess rather than say ‘I don’t know,’” they say.
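The incentive problem described above comes down to simple expected-value arithmetic. As a minimal sketch (not from the paper itself), here is why accuracy-only grading always rewards guessing over abstaining; the function name and the four-choice setup are illustrative assumptions:

```python
def expected_score_accuracy_only(k: int, guess: bool) -> float:
    """Expected score on one k-choice question when only exact matches earn credit.

    Illustrative sketch: abstaining ("I don't know") always scores 0,
    while a blind guess is right with probability 1/k.
    """
    if not guess:
        return 0.0       # abstaining guarantees a zero
    return 1.0 / k       # a random guess sometimes gets lucky

# With 4 choices, guessing averages 0.25 points per question while
# abstaining averages 0.0 — so a model optimized purely for accuracy
# learns to always produce an answer, confident or not.
print(expected_score_accuracy_only(4, guess=True))   # 0.25
print(expected_score_accuracy_only(4, guess=False))  # 0.0
```

However low the model's confidence, guessing never scores worse than abstaining under this rubric, which is exactly the "wrong incentive" the researchers describe.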

    The proposed solution, then, is similar to tests (like the SAT) that include “negative [scoring] for wrong answers or partial credit for leaving questions blank to discourage blind guessing.” Similarly, OpenAI says model evaluations need to “penalize confident errors more than you penalize uncertainty, and give partial credit for appropriate expressions of uncertainty.”
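The SAT-style fix can be sketched the same way. Assuming the classic penalty of 1/3 point per wrong answer on a four-choice question (an illustrative parameter, not a value OpenAI specifies), guessing only pays off when the model's confidence exceeds the break-even point:

```python
def expected_score_penalized(p_correct: float, guess: bool,
                             wrong_penalty: float = 1.0 / 3.0) -> float:
    """Expected score when wrong answers cost `wrong_penalty` and abstaining scores 0.

    `p_correct` is the model's probability of being right if it answers.
    Illustrative sketch of penalty-based grading, not the paper's exact scheme.
    """
    if not guess:
        return 0.0  # abstaining is now a safe, neutral option
    # +1 for a correct answer, -wrong_penalty for an incorrect one
    return p_correct * 1.0 - (1.0 - p_correct) * wrong_penalty

# At 25% confidence the expected score is zero (the break-even point for a
# 1/3 penalty); below it, abstaining beats guessing — confident errors now
# cost more than an honest "I don't know."
print(expected_score_penalized(0.10, guess=True))   # negative: abstain instead
print(expected_score_penalized(0.60, guess=True))   # positive: answering pays
```

Under this rubric a model that reports uncertainty when it is genuinely unsure outscores one that always guesses, which is the behavior the updated evals are meant to reward.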

    And the researchers argue that it’s not enough to introduce “a few new uncertainty-aware tests on the side.” Instead, “the widely used, accuracy-based evals need to be updated so that their scoring discourages guessing.”

    “If the main scoreboards keep rewarding lucky guesses, models will keep learning to guess,” the researchers say.
