{"id":36210,"date":"2025-12-06T22:57:54","date_gmt":"2025-12-06T22:57:54","guid":{"rendered":"https:\/\/naijaglobalnews.org\/?p=36210"},"modified":"2025-12-06T22:57:54","modified_gmt":"2025-12-06T22:57:54","slug":"how-close-are-todays-ai-models-to-agi-and-to-self-improving-into-superintelligence","status":"publish","type":"post","link":"https:\/\/naijaglobalnews.org\/?p=36210","title":{"rendered":"How Close Are Today\u2019s AI Models to AGI\u2014And to Self-Improving into Superintelligence?"},"content":{"rendered":"<p>\n<\/p>\n<p class=\"article_pub_date-zPFpJ\">December 6, 2025<\/p>\n<p class=\"article_read_time-ZYXEi\">5 min read<\/p>\n<p> <span class=\"google_cta_text-ykyUj\"><span class=\"google_cta_text_desktop-wtvUj\">Add Us On Google<\/span><span class=\"google_cta_text_mobile-jmni9\">Add SciAm<\/span><\/span><span class=\"google_cta_icon-pdHW3\"\/><\/p>\n<p>Are We Seeing the First Steps Toward AI Superintelligence?<\/p>\n<p>Today\u2019s leading AI models can already write and refine their own software. The question is whether that self-improvement can ever snowball into true superintelligence<\/p>\n<p class=\"article_authors-ZdsD4\">By Deni Ellis B\u00e9chard <span class=\"article_editors__links-aMTdN\">edited by Eric Sullivan<\/span><\/p>\n<p>KTSDESIGN\/SCIENCE PHOTO LIBRARY<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">The Matrix, The Terminator\u2014so much of our science fiction is built around the dangers of superintelligent artificial intelligence: a system that exceeds the best humans across nearly all cognitive domains. OpenAI CEO Sam Altman and Meta CEO Mark Zuckerberg have predicted we\u2019ll achieve such AI in the coming years. Yet machines like those depicted as battling humanity in those movies would have to be far more advanced than ChatGPT, not to mention more capable of making Excel spreadsheets than Microsoft Copilot. 
So how can anyone think we\u2019re remotely close to artificial superintelligence?<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">One answer goes back to 1965, when statistician Irving John Good introduced the idea of an \u201cultraintelligent machine.\u201d He wrote that once it became sufficiently sophisticated, a computer would rapidly improve itself. If this seems far-fetched, consider how AlphaGo Zero\u2014an AI system developed at DeepMind in 2017 to play the ancient Chinese board game Go\u2014was built. Using no data from human games, AlphaGo Zero played itself millions of times, achieving in days an improvement that would have taken a human a lifetime and that allowed it to defeat the previous versions of AlphaGo that had already beaten the world\u2019s best human players. Good\u2019s idea was that any system that was sufficiently intelligent to rewrite itself would create iterations of itself, each one smarter than the previous and even more capable of improvement, triggering an \u201cintelligence explosion.\u201d<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">The question, then, is how close we are to that first system capable of autonomous self-improvement. Though the runaway systems Good described aren\u2019t here yet, self-improving computers are\u2014at least in narrow domains. AI is already running code on itself. OpenAI\u2019s Codex and Anthropic\u2019s Claude Code can work independently for an hour or more writing new code or updating existing code. Using Codex recently, I thumbed a prompt into my phone while on a walk, and it made a working website before I reached home. In the hands of skilled coders, such systems can do dramatically more, from reorganizing large code bases to sketching entirely new ways to build the software in the first place.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">So why hasn\u2019t a model powering ChatGPT quietly coded itself into ultraintelligence? The hitch is in the phrase above: \u201cin the hands of skilled coders.\u201d Despite AI\u2019s impressive improvements, our current systems still rely on humans to set goals, design experiments and decide which changes count as genuine progress. They\u2019re not yet capable of evolving independently in a robust way, which makes some talk about imminent superintelligence seem blown out of proportion\u2014unless, of course, current AI systems are closer than they appear to being able to self-improve in increasingly broad slices of their abilities.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">One area in which they already look superhuman is how much information they can absorb and manipulate. The most advanced models are trained on far more text than any human could read in a lifetime\u2014from poetry to history to the sciences. They can also keep track of far longer stretches of text while they work. Already, with commercially available systems such as ChatGPT and Gemini, I can upload a stack of books and have the AI synthesize and critique them in a way that would take a human weeks. That doesn\u2019t mean the result is always correct or insightful\u2014but it does mean that, in principle, a system like this could read its own documentation, logs, and code and propose changes at a speed and scale no engineering team could match.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">Reasoning, however, is where these systems lag\u2014though that\u2019s no longer true in certain focused areas. 
DeepMind\u2019s AlphaDev and related systems have already found new, more efficient algorithms for tasks such as sorting, results that are now used in real-world code and that go beyond simple statistical mimicry. Other models excel at formal mathematics and graduate-level science questions that resist simple pattern-matching. We can debate the value of any particular benchmark\u2014and researchers are doing exactly that\u2014but there\u2019s no question that some AI systems have become capable of discovering solutions humans had not previously found.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">If the systems already have these abilities, what, then, is the missing piece? One answer is artificial general intelligence (AGI), the sort of dynamic, flexible reasoning that allows humans to learn from one field and apply it to others. As I\u2019ve previously written, we keep shifting our definitions of AGI as machines master new skills. But for the superintelligence question, what matters is not the label we attach; it\u2019s whether a system can use its skills to reliably redesign and upgrade itself.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">And this brings us back to Good\u2019s \u201cintelligence explosion.\u201d If we do build systems with that kind of flexible, humanlike reasoning across many domains, what will separate them from superintelligence? Advanced models are already trained on more science and literature than any human, have far greater working memories and show extraordinary reasoning skills in limited domains. Once that missing piece of flexible reasoning is in place, and once we allow such systems to deploy those skills on their own code, data and training processes, could the leap to fully superhuman performance be shorter than we imagine?<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">Not everyone agrees. 
Some researchers believe we have yet to fundamentally understand intelligence and that this missing piece will take longer than expected to engineer. Others speak of AGI being achieved in a few years, leading to further advances far beyond human capacities. In 2024 Altman publicly suggested that superintelligence could arrive \u201cin a few thousand days.\u201d<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">If this sounds too much like science fiction, consider that AI companies regularly run safety tests on their systems to make sure they can\u2019t go into a runaway self-improvement loop. METR, an independent AI safety group, evaluates models according to how long they can reliably sustain a complex task before reaching failure. This past November, its tests of GPT-5.1-Codex-Max came in around two hours and 42 minutes. This is a huge leap from GPT-4\u2019s few minutes of such performance on the same metric, but it isn\u2019t the situation Good described.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">Anthropic runs similar tests on its AI systems. \u201cTo be clear, we are not yet at \u2018self-improving AI,\u2019\u201d wrote the company\u2019s co-founder and head of policy Jack Clark in October, \u201cbut we are at the stage of \u2018AI that improves bits of the next AI, with increasing autonomy.\u2019\u201d<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">If AGI is achieved, and we add human-level judgment to an immense information base, vast working memory and extraordinary speed, Good\u2019s idea of rapid self-improvement starts to look less like science fiction. The real question is whether we\u2019ll stop at \u201cmere human\u201d\u2014or risk overshooting.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>December 6, 2025 5 min read Are We Seeing the First Steps Toward AI Superintelligence? Today\u2019s leading AI models can already write and refine their own software. 
The question is whether that self-improvement can ever snowball into true superintelligence By Deni Ellis B\u00e9chard edited by Eric Sullivan KTSDESIGN\/SCIENCE PHOTO LIBRARY<\/p>\n","protected":false},"author":1,"featured_media":36211,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[58],"tags":[20265,1083,4112,6879,8495,831],"class_list":{"0":"post-36210","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-science","8":"tag-agiand","9":"tag-close","10":"tag-models","11":"tag-selfimproving","12":"tag-superintelligence","13":"tag-todays"},"_links":{"self":[{"href":"https:\/\/naijaglobalnews.org\/index.php?rest_route=\/wp\/v2\/posts\/36210","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/naijaglobalnews.org\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/naijaglobalnews.org\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/naijaglobalnews.org\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/naijaglobalnews.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=36210"}],"version-history":[{"count":0,"href":"https:\/\/naijaglobalnews.org\/index.php?rest_route=\/wp\/v2\/posts\/36210\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/naijaglobalnews.org\/index.php?rest_route=\/wp\/v2\/media\/36211"}],"wp:attachment":[{"href":"https:\/\/naijaglobalnews.org\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=36210"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/naijaglobalnews.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=36210"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/naijaglobalnews.org\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=36210"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}