{"id":32458,"date":"2025-11-06T22:01:56","date_gmt":"2025-11-06T22:01:56","guid":{"rendered":"https:\/\/naijaglobalnews.org\/?p=32458"},"modified":"2025-11-06T22:01:56","modified_gmt":"2025-11-06T22:01:56","slug":"ai-decodes-visual-brain-activity-and-writes-captions-for-it","status":"publish","type":"post","link":"https:\/\/naijaglobalnews.org\/?p=32458","title":{"rendered":"AI Decodes Visual Brain Activity\u2014and Writes Captions for It"},"content":{"rendered":"<p>\n<\/p>\n<p class=\"article_pub_date-zPFpJ\">November 6, 2025<\/p>\n<p class=\"article_read_time-ZYXEi\">3 min read<\/p>\n<p>AI Decodes Visual Brain Activity\u2014and Writes Captions for It<\/p>\n<p>A non-invasive imaging technique can translate scenes in your head into sentences. It could help to reveal how the brain interprets the world<\/p>\n<p class=\"article_authors-ZdsD4\">By Max Kozlov &amp; Nature magazine <\/p>\n<p>Functional magnetic resonance imaging is a non-invasive way to explore brain activity.<\/p>\n<p>PBH Images\/Alamy Stock Photo<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">Reading a person\u2019s mind using a recording of their brain activity sounds futuristic, but it\u2019s now one step closer to reality. A new technique called \u2018mind captioning\u2019 generates descriptive sentences of what a person is seeing or picturing in their mind using a read-out of their brain activity, with impressive accuracy.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">The technique, described in a paper published today in Science Advances, also offers clues for how the brain represents the world before thoughts are put into words. And it might be able to help people with language difficulties, such as those caused by strokes, to better communicate.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">The model predicts what a person is looking at \u201cwith a lot of detail\u201d, says Alex Huth, a computational neuroscientist at the University of California, Berkeley. 
\u201cThis is hard to do. It\u2019s surprising you can get that much detail.\u201d<\/p>\n<h2 id=\"scan-and-predict\" class=\"\" data-block=\"sciam\/heading\">Scan and predict<\/h2>\n<p class=\"\" data-block=\"sciam\/paragraph\">Researchers have been able to accurately predict what a person is seeing or hearing using their brain activity for more than a decade. But decoding the brain&#8217;s interpretation of complex content, such as short videos or abstract shapes, has proved to be more difficult.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">Previous attempts have identified only key words that describe what a person saw rather than the complete context, which might include the subject of a video and actions that occur in it, says Tomoyasu Horikawa, a computational neuroscientist at NTT Communication Science Laboratories in Kanagawa, Japan. Other attempts have used artificial intelligence (AI) models that can create sentence structure themselves, making it difficult to know whether the description was actually represented in the brain, he adds.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">Horikawa\u2019s method first used a deep-language AI model to analyse the text captions of more than 2,000 videos, turning each one into a unique numerical \u2018meaning signature\u2019. A separate AI tool was then trained on six participants\u2019 brain scans and learnt to find the brain-activity patterns that matched each meaning signature while the participants watched the videos.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">Once trained, this brain decoder could read a new brain scan from a person watching a video and predict the meaning signature. 
Then, a different AI text generator would search for a sentence that comes closest to the meaning signature decoded from the individual\u2019s brain.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">For example, a participant watched a short video of a person jumping from the top of a waterfall. Using their brain activity, the AI model guessed strings of words, starting with \u201cspring flow\u201d, progressing to \u201cabove rapid falling water fall\u201d on the tenth guess and arriving at \u201ca person jumps over a deep water fall on a mountain ridge\u201d on the 100th guess.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">The researchers also asked participants to recall video clips that they had seen. The AI models successfully generated descriptions of these recollections, demonstrating that the brain seems to use a similar representation for both viewing and remembering.<\/p>\n<h2 id=\"reading-the-future\" class=\"\" data-block=\"sciam\/heading\">Reading the future<\/h2>\n<p class=\"\" data-block=\"sciam\/paragraph\">This technique, which uses non-invasive functional magnetic resonance imaging, could help to improve the process by which implanted brain\u2013computer interfaces might translate people\u2019s non-verbal mental representations directly into text. \u201cIf we can do that using these artificial systems, maybe we can help out these people with communication difficulties,\u201d says Huth, who developed a similar model in 2023 with his colleagues that decodes language from non-invasive brain recordings.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">These findings raise concerns about mental privacy, Huth says, as researchers grow closer to revealing intimate thoughts, emotions and health conditions that could, in theory, be used for surveillance, manipulation or to discriminate against people. 
Neither Huth\u2019s model nor Horikawa\u2019s crosses a line, they both say, because these techniques require participants\u2019 consent and the models cannot discern private thoughts. \u201cNobody has shown you can do that, yet,\u201d says Huth.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">This article is reproduced with permission and was first published on November 5, 2025.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>November 6, 2025 3 min read AI Decodes Visual Brain Activity\u2014and Writes Captions for It A non-invasive imaging technique can translate scenes in your head into sentences. It could help to reveal how the brain interprets the world By Max Kozlov &amp; Nature magazine Functional magnetic resonance imaging is a non-invasive way to explore brain<\/p>\n","protected":false},"author":1,"featured_media":32459,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[50],"tags":[18737,2121,18738,18736,7035,12375],"class_list":{"0":"post-32458","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-environment","8":"tag-activityand","9":"tag-brain","10":"tag-captions","11":"tag-decodes","12":"tag-visual","13":"tag-writes"},"_links":{"self":[{"href":"https:\/\/naijaglobalnews.org\/index.php?rest_route=\/wp\/v2\/posts\/32458","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/naijaglobalnews.org\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/naijaglobalnews.org\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/naijaglobalnews.org\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/naijaglobalnews.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=32458"}],"version-history":[{"count":0,"href":"https:\/\/naijaglobalnews.org\/index.php?rest_route=\/wp\/v2\/posts\/32458\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/naijaglobalnews.org\/index.php?rest_route=\/wp\/v2\/media\/32459"}],"wp:attachment":[{"href":"https:\/\/naijaglobalnew
s.org\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=32458"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/naijaglobalnews.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=32458"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/naijaglobalnews.org\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=32458"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}