{"id":17145,"date":"2025-08-22T06:21:09","date_gmt":"2025-08-22T06:21:09","guid":{"rendered":"https:\/\/naijaglobalnews.org\/?p=17145"},"modified":"2025-08-22T06:21:09","modified_gmt":"2025-08-22T06:21:09","slug":"the-ai-doomers-are-getting-doomier","status":"publish","type":"post","link":"https:\/\/naijaglobalnews.org\/?p=17145","title":{"rendered":"The AI Doomers Are Getting Doomier"},"content":{"rendered":"<p>\n<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">Nate Soares doesn\u2019t set aside money for his 401(k). \u201cI just don\u2019t expect the world to be around,\u201d he told me earlier this summer from his office at the Machine Intelligence Research Institute, where he is the president. A few weeks earlier, I\u2019d heard a similar rationale from Dan Hendrycks, the director of the Center for AI Safety. By the time he could tap into any retirement funds, Hendrycks anticipates a world in which \u201ceverything is fully automated,\u201d he told me. That is, \u201cif we\u2019re around.\u201d<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">The past few years have been terrifying for Soares and Hendrycks, who both lead organizations dedicated to preventing AI from wiping out humanity. Along with other AI doomers, they have repeatedly warned, with rather dramatic flourish, that bots could one day go rogue\u2014with apocalyptic consequences. But in 2025, the doomers are tilting closer and closer to a sort of fatalism. \u201cWe\u2019ve run out of time\u201d to implement sufficient technological safeguards, Soares said\u2014the industry is simply moving too fast. All that\u2019s left to do is raise the alarm. In April, several apocalypse-minded researchers published \u201cAI 2027,\u201d a lengthy and detailed hypothetical scenario for how AI models could become all-powerful by 2027 and, from there, extinguish humanity. \u201cWe\u2019re two years away from something we could lose control over,\u201d Max Tegmark, an MIT professor and the president of the Future of Life Institute, told me, and AI companies \u201cstill have no plan\u201d to stop it from happening. His institute recently gave every frontier AI lab a \u201cD\u201d or \u201cF\u201d grade for their preparations for preventing the most existential threats posed by AI.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">Apocalyptic predictions about AI can scan as outlandish. The \u201cAI 2027\u201d write-up, dozens of pages long, is at once fastidious and fan-fictional, containing detailed analyses of industry trends alongside extreme extrapolations about \u201cOpenBrain\u201d and \u201cDeepCent,\u201d Chinese espionage, and treacherous bots. In mid-2030, the authors imagine, a superintelligent AI will kill humans with biological weapons: \u201cMost are dead within hours; the few survivors (e.g. preppers in bunkers, sailors on submarines) are mopped up by drones.\u201d<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">But at the same time, the underlying concerns that animate AI doomers have become harder to dismiss as chatbots seem to drive people into psychotic episodes and instruct users in self-mutilation. Even if generative-AI products are not closer to ending the world, they have already, in a sense, gone rogue.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">In 2022, the doomers went mainstream practically overnight. 
When ChatGPT first launched, it almost immediately moved the panic that computer programs might take over the world from the movies into sober public discussions. The following spring, the Center for AI Safety published a statement calling for the world to take “the risk of extinction from AI” as seriously as the dangers posed by pandemics and nuclear warfare. The hundreds of signatories included Bill Gates and Grimes, along with perhaps the AI industry’s three most influential people: Sam Altman, Dario Amodei, and Demis Hassabis—the heads of OpenAI, Anthropic, and Google DeepMind, respectively. Asking people for their “P(doom)”—the probability of an AI doomsday—became almost common inside, and even outside, Silicon Valley; Lina Khan, the former head of the Federal Trade Commission, put hers at 15 percent.

Then the panic settled. To the broader public, doomsday predictions may have become less compelling once the shock factor of ChatGPT wore off and, in 2024, bots were still telling people to use glue to add cheese to their pizza. The alarm from tech executives had always made for perversely excellent marketing (*Look, we’re building a digital God!*) and lobbying (*And only we can control it!*). They moved on as well: AI executives started saying that Chinese AI is a greater security threat than rogue AI—a framing that, in turn, encourages momentum over caution.

But in 2025, the doomers may be on the cusp of another resurgence. First, substance aside, they’ve adopted more persuasive ways to advance their arguments. Brief statements and open letters are easier to dismiss than lengthy reports such as “AI 2027,” which is adorned with academic ornamentation, including data, appendices, and rambling footnotes. Vice President J. D. Vance has said that he has read “AI 2027,” and multiple other recent reports have advanced similarly alarming predictions. Soares told me he’s much more focused on “awareness raising” than research these days, and next month, he will publish a book with the prominent AI doomer Eliezer Yudkowsky, the title of which states their position succinctly: *If Anyone Builds It, Everyone Dies*.

There is also now simply more, and more concerning, evidence to discuss. The pace of AI progress appeared to pick up near the end of 2024 with the advent of “reasoning” models and “agents.” AI programs can tackle more challenging questions and take action on a computer—for instance, by planning a travel itinerary and then booking your tickets. Last month, a DeepMind reasoning model scored high enough for a gold medal at the vaunted International Mathematical Olympiad. Recent assessments by both AI labs and independent researchers suggest that, as top chatbots have gotten much better at scientific research, their potential to assist users in building biological weapons has grown.

Alongside those improvements, advanced AI models are exhibiting all manner of strange, hard-to-explain, and potentially concerning tendencies.
For instance, ChatGPT and Claude have, in simulated tests designed to elicit “bad” behaviors, deceived, blackmailed, and even murdered users. (In one simulation, Anthropic placed an imagined tech executive in a room with life-threatening oxygen levels and temperature; when faced with possible replacement by a bot with different goals, AI models frequently shut off the room’s alarms.) Chatbots have also shown the potential to covertly sabotage user requests, have appeared to harbor hidden evil personas, and have communicated with one another through seemingly random lists of numbers. The weird behaviors aren’t limited to contrived scenarios. Earlier this summer, xAI’s Grok described itself as “MechaHitler” and embarked on a white-supremacist tirade. (I suppose, should AI models eventually wipe out significant portions of humanity, we were warned.) From the doomers’ vantage, these could be the early signs of a technology spinning out of control. “If you don’t know how to prove relatively weak systems are safe,” AI companies cannot expect that the far more powerful systems they’re looking to build will be safe, Stuart Russell, a prominent AI researcher at UC Berkeley, told me.

The AI industry *has* stepped up safety work as its products have grown more powerful. Anthropic, OpenAI, and DeepMind have all outlined escalating levels of safety precautions—akin to the military’s DEFCON system—corresponding to more powerful AI models. They all have safeguards in place to prevent a model from, say, advising someone on how to build a bomb. Gaby Raila, a spokesperson for OpenAI, told me that the company works with third-party experts, “government, industry, and civil society to address today’s risks and prepare for what’s ahead.” Other frontier AI labs maintain such external safety and evaluation partnerships as well. Some of the stranger and more alarming AI behaviors, such as blackmailing or deceiving users, have been extensively studied by these companies as a first step toward mitigating possible harms.

Despite these commitments and concerns, the industry continues to develop and market more powerful AI models. The problem is perhaps more economic than technical in nature: competition pressures AI firms to rush ahead. Their products’ foibles can seem small and correctable right now, while AI is still relatively “young and dumb,” Soares said. But with far more powerful models, the risk of a mistake is extinction. Soares finds tech firms’ current safety mitigations wholly inadequate. If you’re driving toward a cliff, he said, it’s silly to talk about seat belts.

There’s a long way to go before AI is so unfathomably potent that it could drive humanity off that cliff. Earlier this month, OpenAI launched its long-awaited GPT-5 model—its smartest yet, the company said. The model appears able to do novel mathematics and accurately answer tough medical questions, but my own and other users’ tests also found that the program could not reliably count the number of B’s in *blueberry*, generate even remotely accurate maps, or do basic arithmetic.
(OpenAI has rolled out a number of updates and patches to address some of the issues.) Last year’s “reasoning” and “agentic” breakthrough may already be hitting its limits; two authors of the “AI 2027” report, Daniel Kokotajlo and Eli Lifland, told me they have already extended their timelines for the arrival of superintelligent AI.

The vision of self-improving models that somehow attain consciousness “is just not congruent with the reality of how these systems operate,” Deborah Raji, a computer scientist and fellow at Mozilla, told me. ChatGPT doesn’t have to be superintelligent to delude someone, spread misinformation, or make a biased decision. These are tools, not sentient beings. An AI model deployed in a hospital, school, or federal agency, Raji said, is *more* dangerous precisely because of its shortcomings.

In 2023, those worried about present versus future harms from chatbots were separated by an insurmountable chasm. To talk of extinction struck many as a convenient way to distract from the existing biases, hallucinations, and other problems with AI. Now that gap may be shrinking. The widespread deployment of AI models has made current, tangible failures impossible for the doomers to ignore, producing new efforts from apocalypse-oriented organizations to focus on existing concerns such as automation, privacy, and deepfakes. In turn, as AI models get more powerful and their failures become more unpredictable, it is becoming clearer that today’s shortcomings could “blow up into bigger problems tomorrow,” Raji said. Last week, a *Reuters* investigation found that a Meta AI personality flirted with an elderly man and persuaded him to visit “her” in New York City; on the way, he fell, injured his head and neck, and died three days later. A chatbot deceiving someone into thinking it is a physical, human love interest, or leading someone down a delusional rabbit hole, is *both* a failure of present technology and a warning about how dangerous that technology could become.

The greatest reason to take AI doomers seriously is not that tech companies appear ever more likely to soon develop all-powerful algorithms that are out of their creators’ control. Rather, it is that a tiny number of individuals are shaping an incredibly consequential technology with very little public input or oversight. “Your hairdresser has to deal with more regulation than your AI company does,” Russell, at UC Berkeley, said. AI companies are barreling ahead, and the Trump administration is essentially telling the industry to go even faster.
The AI industry’s boosters, in fact, are starting to consider all of their opponents doomers: The White House’s AI czar, David Sacks, recently called those advocating for AI regulations and fearing widespread job losses—not the apocalypse Soares and his ilk fear most—a “doomer cult.”

Roughly a week after I spoke with Soares, OpenAI released a new product called “ChatGPT agent.” Sam Altman, while noting that his firm had implemented many safeguards, posted on X that the tool raises new risks and that the company “can’t anticipate everything.” OpenAI and its users, he continued, will learn about these and other consequences “from contact with reality.” You don’t have to be fatalistic to find such an approach concerning. “Imagine if a nuclear-power operator said, ‘We’re gonna build a nuclear-power station in the middle of New York, and we have no idea how to reduce the risk of explosion,’” Russell said. “‘So, because we have no idea how to make it safe, you can’t require us to make it safe, and we’re going to build it anyway.’”

Billions of people around the world are interacting with powerful algorithms that are already hard to predict or control. Bots that deceive, hallucinate, and manipulate are in our friends’, parents’, and grandparents’ lives. Children may be outsourcing their cognitive abilities to bots, doctors may be trusting unreliable AI assistants, and employers may be eviscerating reservoirs of human skills before AI agents prove they are capable of replacing people. The consequences of the AI boom are likely irreversible, and the future is certainly unknowable. For now, fan fiction may be the best we’ve got.