    Chatbots Are Surprisingly Effective at Swaying Voters

By onlyplanz_80y6mt · December 4, 2025 · 7 min read
    Illustration by The Atlantic

    In the months leading up to last year’s presidential election, more than 2,000 Americans, roughly split across partisan lines, were recruited for an experiment: Could an AI model influence their political inclinations? The premise was straightforward—let people spend a few minutes talking with a chatbot designed to stump for Kamala Harris or Donald Trump, then see if their voting preferences changed at all.

The bots were effective. After talking with a pro-Trump bot, one in 35 people who initially said they would not vote for Trump flipped to saying they would. The proportion who flipped after talking with a pro-Harris bot was even higher, at one in 21. A month later, when participants were surveyed again, much of the effect persisted. The results suggest that AI “creates a lot of opportunities for manipulating people’s beliefs and attitudes,” David Rand, a senior author on the study, which was published today in Nature, told me.

    Rand didn’t stop with the U.S. general election. He and his co-authors also tested AI bots’ persuasive abilities in highly contested national elections in Canada and Poland—and the effects left Rand, who studies information sciences at Cornell, “completely blown away.” In both of these cases, he said, roughly one in 10 participants said they would change their vote after talking with a chatbot. The AI models took the role of a gentle, if firm, interlocutor, offering arguments and evidence in favor of the candidate they represented. “If you could do that at scale,” Rand said, “it would really change the outcome of elections.”

    The chatbots succeeded in changing people’s minds, in essence, by brute force. A separate companion study that Rand also co-authored, published today in Science, examined what factors make one chatbot more persuasive than another and found that AI models needn’t be more powerful, more personalized, or more skilled in advanced rhetorical techniques to be more convincing. Instead, chatbots were most effective when they threw fact-like claims at the user; the most persuasive AI models were those that provided the most “evidence” in support of their argument, regardless of whether that evidence had any bearing on reality. In fact, the most persuasive chatbots were also the least accurate.

Independent experts told me that Rand’s two studies join a growing body of research indicating that generative-AI models are, indeed, capable persuaders: These bots are patient, designed to be perceived as helpful, can draw on a sea of evidence, and appear to many as trustworthy. Granted, caveats exist. It’s unclear how many people would ever have such direct, information-dense conversations with chatbots about whom they’re voting for, especially when they’re not being paid to participate in a study. The studies didn’t test chatbots against more conventional forms of persuasion, such as a pamphlet or a human canvasser, Jordan Boyd-Graber, an AI researcher at the University of Maryland who was not involved with the research, told me. Traditional campaign outreach (mail, phone calls, television ads, and so on) is typically not effective at swaying voters, Jennifer Pan, a political scientist at Stanford who was not involved with the research, told me. AI could very well be different—the new research suggests that the AI bots were more persuasive than traditional ads in previous U.S. presidential elections—but Pan cautioned that it’s too early to say whether a chatbot with a clear link to a candidate would be of much use.

    Even so, Boyd-Graber said that AI “could be a really effective force multiplier” that allows politicians or activists with relatively few resources to sway far more people—especially if the messaging comes from a familiar platform. Every week, hundreds of millions of people ask questions of ChatGPT, and many more receive AI-written responses to questions through Google search. Meta has woven its AI models throughout Facebook and Instagram, and Elon Musk is using his Grok chatbot to remake X’s recommendation algorithm. AI-generated articles and social-media posts abound. Whether by your own volition or not, a good chunk of the information you’ve learned online over the past year has likely been filtered through generative AI. Clearly, political campaigns will want to use chatbots to sway voters, just as they’ve used traditional advertisements and social media in the past.

    But the new research also raises a separate concern: that chatbots and other AI products, largely unregulated but already a feature of daily life, could be used by tech companies to manipulate users for political purposes. “If Sam Altman decided there was something that he didn’t want people to think, and he wanted GPT to push people in one direction or another,” Rand said, his research suggests that the firm “could do that,” although neither paper specifically explores the possibility.

    Consider Musk, the world’s richest man and the proprietor of the chatbot that briefly referred to itself as “MechaHitler.” Musk has explicitly attempted to mold Grok to fit his racist and conspiratorial beliefs, and has used it to create his own version of Wikipedia. Today’s research suggests that the mountains of sometimes bogus “evidence” that Grok advances may also be enough at least to persuade some people to accept Musk’s viewpoints as fact. The models marshaled “in some cases more than 30 ‘facts’ per conversation,” Kobi Hackenburg, a researcher at the UK AI Security Institute and a lead author on the Science paper, told me. “And all of them sound and look really plausible, and the model deploys them really elegantly and confidently.” That makes it challenging for users to pick apart truth from fiction, Hackenburg said; the performance matters as much as the evidence.

    This is not so different, of course, from all the mis- and disinformation that already circulate online. But unlike Facebook and TikTok feeds, chatbots produce “facts” on command whenever a user asks, offering uniquely formulated evidence in response to queries from anyone. And although everyone’s social-media feeds may look different, they do, at the end of the day, present a noisy mix of media from public sources; chatbots are private and bespoke to the individual. AI already appears “to have pretty significant downstream impacts in shaping what people believe,” Renée DiResta, a social-media and propaganda researcher at Georgetown, told me. There’s Grok, of course, and DiResta has found that the AI-powered search engine on President Donald Trump’s Truth Social, which relies on Perplexity’s technology, appears to pull up sources only from conservative media, including Fox, Just the News, and Newsmax.

    Real or imagined, the specter of AI-influenced campaigns will provide fodder for still more political battles. Earlier this year, Trump signed an executive order banning the federal government from contracting “woke” AI models, such as those incorporating notions of systemic racism. Should chatbots themselves become as polarizing as MSNBC or Fox, they will not change public opinion so much as deepen the nation’s epistemic chasm.

    In some sense, all of this debate over the political biases and persuasive capabilities of AI products is a bit of a distraction. Of course chatbots are designed and able to influence human behavior, and of course that influence is biased in favor of the AI models’ creators—to get you to chat for longer, to click on an advertisement, to generate another video. The real persuasive sleight of hand is to convince billions of human users that their interests align with tech companies’—that using a chatbot, and especially this chatbot above any other, is for the best.
