    ‘Tiny’ AI model beats massive LLMs at logic test

    By onlyplanz_80y6mt | November 13, 2025 | 4 min read

    A Tiny Recursive Model beat large language models at solving logic puzzles, despite being trained on a much smaller dataset. Credit: Getty


    A small-scale artificial-intelligence model that learns from only a limited pool of data is exciting researchers with its potential to boost reasoning abilities. The model, known as the Tiny Recursive Model (TRM), outperformed some of the world’s best large language models (LLMs) on the Abstraction and Reasoning Corpus for Artificial General Intelligence (ARC-AGI), a test of visual logic puzzles designed to flummox most machines.

    The model, detailed in a preprint on the arXiv server last month [1], is not readily comparable to an LLM. It is highly specialized, excelling only at the types of logic puzzle on which it is trained, such as sudokus and mazes, and it doesn’t ‘understand’ or generate language. But its ability to perform so well with so few resources (it is 10,000 times smaller than frontier LLMs) suggests a possible route to boosting this capability more widely in AI, researchers say.

    “It’s fascinating research into other forms of reasoning that one day might get used in LLMs,” says Cong Lu, a machine-learning researcher formerly at the University of British Columbia in Vancouver, Canada. However, he cautions that the techniques might no longer be as effective if applied at a much larger scale. “Often techniques work very well at small model sizes and then just stop working” at a bigger scale, he says.

    A test of artificial intelligence

    “The results are very significant in my opinion,” says François Chollet, co-founder of AI firm Ndea, who created the ARC-AGI test. Because such models need to be trained from scratch on each new problem, they are “relatively impractical”, but “I expect a lot more research to come out that will build on top of these results”, he adds.

    The sole author of the paper, Alexia Jolicoeur-Martineau, an AI researcher at Samsung’s Advanced Institute of Technology in Montreal, Canada, says her model shows that the idea that only massive models costing millions of dollars to train can succeed at hard tasks “is a trap”. She has made the model’s code openly available on GitHub for anyone to download and modify. “Currently, there is too much focus on exploiting LLMs rather than devising and expanding new lines of direction,” she wrote on her blog.

    Tiny model, big results

    Most reasoning models are built on top of LLMs, which predict the next word in a sequence by tapping into billions of learned internal connections, known as parameters. They excel by memorizing patterns from billions of documents, a reliance that can trip them up when they encounter unfamiliar logic puzzles.
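
    For readers unfamiliar with that mechanism, here is a deliberately tiny sketch of next-word prediction, assuming nothing about any real LLM’s internals: a bigram counter that always picks the most frequent follower. An LLM replaces the counting with billions of learned parameters, but the predict-the-next-token objective is the same.

        # Toy next-word predictor: count which word follows which, then
        # predict the most frequent follower. Illustration only.
        from collections import Counter, defaultdict

        counts = defaultdict(Counter)
        corpus = "the cat sat on the mat the cat ran".split()
        for prev, nxt in zip(corpus, corpus[1:]):
            counts[prev][nxt] += 1

        def predict_next(word):
            return counts[word].most_common(1)[0][0]

        print(predict_next("the"))  # 'cat' (seen twice after 'the')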

    The TRM takes a different approach. Jolicoeur-Martineau was inspired by a technique known as the hierarchical reasoning model (HRM), developed by the AI firm Sapient Intelligence in Singapore and described in a preprint in June [2]. The HRM improves its answer through multiple iterations.

    The TRM takes a similar approach but uses just 7 million parameters, compared with 27 million for the hierarchical model and billions or trillions for LLMs. For each type of puzzle the algorithm learns, such as sudoku, Jolicoeur-Martineau trained a brain-inspired architecture known as a neural network on around 1,000 examples, each formatted as a string of numbers.
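
    As a toy illustration of that formatting (the exact encoding is an assumption, not taken from the paper), a 9×9 sudoku grid can be flattened into a string of 81 digits, with 0 standing in for empty cells:

        # Hypothetical encoding for illustration only: flatten a 9x9 grid
        # (lists of ints, 0 = blank) into a string of 81 digits.
        def encode_sudoku(grid):
            assert len(grid) == 9 and all(len(row) == 9 for row in grid)
            return "".join(str(cell) for row in grid for cell in row)

        example = [[5, 3, 0, 0, 7, 0, 0, 0, 0]] + [[0] * 9 for _ in range(8)]
        print(encode_sudoku(example))  # '530070000' followed by 72 zeros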


    During training, the model guesses the solution and then compares it with the correct answer, before refining its guess and repeating the process. In this way, it learns strategies to improve its guesses. The model then takes a similar approach to solve unseen puzzles of the same type, successively refining its answer up to 16 times before generating a response.
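
    The loop below is a minimal sketch of that guess-and-refine cycle in PyTorch. It is not Jolicoeur-Martineau’s implementation: the network, the names (TinyRefiner, refine, solve) and the shapes are illustrative assumptions; only the idea of refining an answer up to 16 times comes from the article.

        import torch
        import torch.nn as nn

        class TinyRefiner(nn.Module):
            """One small network applied repeatedly, rather than a deep stack."""
            def __init__(self, vocab=10, dim=64):
                super().__init__()
                self.embed = nn.Embedding(vocab, dim)
                self.mix = nn.Sequential(
                    nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, dim))
                self.head = nn.Linear(dim, vocab)

            def refine(self, puzzle, guess):
                # Combine the fixed puzzle with the current guess and
                # propose a revised value for every cell.
                h = torch.cat([self.embed(puzzle), self.embed(guess)], dim=-1)
                return self.head(self.mix(h))

        def solve(model, puzzle, steps=16):
            # Refine the answer up to 16 times, mirroring the iteration
            # count reported in the article. During training, each step's
            # logits would be compared with the true solution (e.g. via
            # cross-entropy) so the model learns to improve its guesses.
            guess = puzzle.clone()
            for _ in range(steps):
                guess = model.refine(puzzle, guess).argmax(dim=-1)
            return guess

        model = TinyRefiner()
        puzzle = torch.randint(0, 10, (1, 81))  # one flattened 9x9 grid, 0 = blank
        print(solve(model, puzzle).shape)       # torch.Size([1, 81])

    The appeal of this recursive design is that depth comes from reapplying one small network to its own output, rather than from stacking ever more parameters.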
