{"id":45451,"date":"2026-02-27T13:57:14","date_gmt":"2026-02-27T13:57:14","guid":{"rendered":"https:\/\/naijaglobalnews.org\/?p=45451"},"modified":"2026-02-27T13:57:14","modified_gmt":"2026-02-27T13:57:14","slug":"how-labos-ai-powered-smart-goggles-could-reduce-human-error-in-science","status":"publish","type":"post","link":"https:\/\/naijaglobalnews.org\/?p=45451","title":{"rendered":"How LabOS AI-powered smart goggles could reduce human error in science"},"content":{"rendered":"<p>\n<\/p>\n<p class=\"article_pub_date-zPFpJ\">February 27, 2026<\/p>\n<p class=\"article_read_time-ZYXEi\">4 min read<\/p>\n<p> <span class=\"google_cta_text-ykyUj\"><span class=\"google_cta_text_desktop-wtvUj\">Add Us On Google<\/span><span class=\"google_cta_text_mobile-jmni9\">Add SciAm<\/span><\/span><span class=\"google_cta_icon-pdHW3\"\/><\/p>\n<p>AI-powered smart goggles are helping novice scientists perform like experts<\/p>\n<p>A new wearable AI system watches your hands through smart glasses, guiding experiments and stopping mistakes before they happen<\/p>\n<p class=\"article_authors-ZdsD4\">By Deni Ellis B\u00e9chard <span class=\"article_editors__links-aMTdN\">edited by Eric Sullivan<\/span><\/p>\n<p>A view of a lab bench as seen through LabOS goggles.<\/p>\n<p>Cong Group, Stanford University<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">Imagine standing at the laboratory bench, working on an experiment, when, as you finish one step, a display on the inside of your lab goggles tells you what to do next. A small camera in the frame watches your hands closely. If you reach for the wrong tube, the display flashes a warning. Before you can make the mistake, the system tells you how to get back on track.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">Laboratory safety goggles have finally joined the ranks of smart devices. 
That\u2019s the promise behind LabOS, an AI \u201coperating system\u201d for scientific laboratories built by the Stanford-Princeton AI Coscientist Team, a group led by Stanford University bioengineer Le Cong and Princeton University computer scientist Mengdi Wang, with founding partners that include NVIDIA. Powered by NVIDIA\u2019s vision-language models to process visual data, the system is designed to provide AI with real-time knowledge of lab work so it can determine what causes experiments to fail or succeed and rapidly train new scientists to expert levels by guiding them through experimental protocols.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">Walk into a wet lab, Cong says, and \u201cit hasn\u2019t changed much in the last 50 years.\u201d This matters, he explains, because a large portion of the time, science is done \u201cin the physical lab, in the physical world, not on computers.\u201d As described in a recent preprint paper, LabOS aims to bridge this physical-digital divide.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">The scientific community has long grappled with a problem known for more than a decade as the \u201creplication crisis.\u201d In a 2016 Nature survey, Monya Baker, then an editor for the journal, reported that \u201cmore than 70% of researchers have tried and failed to reproduce another scientist\u2019s experiments,\u201d and more than half couldn\u2019t reproduce their own work. Some of that failure rate is attributable to statistical malpractice or publication pressure. But one common cause receives less attention: humans doing repetitive lab work make mistakes. 
A reagent added at the wrong temperature, a step skipped under time pressure, a contaminated pipette tip\u2014these are errors that can be too small to notice but are large enough to wreck an experiment.<\/p>\n<p>A researcher using the LabOS goggles next to a robotic arm.<\/p>\n<p>Cong Group, Stanford University<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">The solution proposed by Wang and Cong\u2019s team is an open-source platform and hardware kit that lets AI see what scientists see. Researchers in early pilot tests in Cong\u2019s lab at Stanford and Wang\u2019s at Princeton wear augmented reality\/extended reality (AR\/XR) glasses that stream video directly to the system. LabOS compares what it sees against the written protocol, offering guidance to the wearer while also gathering training data. The AI can talk the scientist through each step, reminding them to keep a surface sterile or flagging lapses in technique.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">AI needs real-time knowledge of experiments to learn what works and what doesn\u2019t, much in the same way that robots and self-driving cars have to gather real-world data to update their systems. \u201cWe can have 1,000 chatbots, 1,000 AI scientists trying to tell real scientists what to do,\u201d Wang says, but if AI isn\u2019t wired into the physical experiment, \u201cwe never have anything verifiable.\u201d<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">Normally when humans do lab work, learning can be slow. If an experiment fails, they try to determine what went wrong and begin again. But when AI watches an experiment and sees the outcome, it may be able to more rapidly determine which steps caused problems and can design a new experiment. 
By recording entire experiments, an AI can scrutinize the smallest details to pinpoint what caused a failure.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">This oversight extends beyond human guidance; LabOS also utilizes a robotic arm to handle tedious tasks such as mixing. \u201cIt\u2019s not like replacing people,\u201d Cong says. \u201cWe need to help people.\u201d<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">So far, the assistance is yielding results. In an experimental procedure that involved increasing the amount of a certain protein in cells, junior scientists with just one week of LabOS training obtained results that were virtually indistinguishable from those of expert scientists. \u201cI couldn\u2019t tell the difference as a professor,\u201d Cong says. \u201cThe results from the experiment\u2014they\u2019re identical.\u201d<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">\u201cFrom a robotics and human-computer interaction perspective, this work highlights a promising direction,\u201d says Kourosh Darvish, a scientist at the AI and Automation Lab at the University of Toronto\u2019s Acceleration Consortium, who was not involved in LabOS development. Yet he notes the importance of developing standards to better evaluate such work. \u201cAs AI systems increasingly move from analytical tools toward active partners in experimentation, community-level standardization and validation will be critical.\u201d<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">The AI Coscientist Team is already pushing this technology beyond the research bench. Recently the researchers introduced MedOS, adapting their AI-and-AR architecture to assist surgeons with anatomical mapping and tool alignment. 
Ultimately, Wang says, the broader ambition is to turn \u201cevery scientific research lab\u201d\u2014and soon, every clinic\u2014\u201cinto an AI-perceivable and AI-operable environment,\u201d creating a system that can train professionals faster, catch mistakes and improve human outcomes.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>February 27, 2026 4 min read AI-powered smart goggles are helping novice scientists perform like experts A new wearable AI system watches your hands through smart glasses, guiding experiments and stopping mistakes before they happen By Deni Ellis B\u00e9chard edited by Eric Sullivan A view of a lab bench as<\/p>\n","protected":false},"author":1,"featured_media":45452,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[50],"tags":[2084,7282,23405,761,23404,1533,516,646],"class_list":{"0":"post-45451","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-environment","8":"tag-aipowered","9":"tag-error","10":"tag-goggles","11":"tag-human","12":"tag-labos","13":"tag-reduce","14":"tag-science","15":"tag-smart"},"_links":{"self":[{"href":"https:\/\/naijaglobalnews.org\/index.php?rest_route=\/wp\/v2\/posts\/45451","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/naijaglobalnews.org\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/naijaglobalnews.org\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/naijaglobalnews.org\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/naijaglobalnews.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=45451"}],"version-history":[{"count":0,"href":"https:\/\/naijaglobalnews.org\/index.php?rest_route=\/wp\/v2\/posts\/45451\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/naijaglobalnews.org\/index.php?rest_route=\/wp\/v2\/media\/45452"}],"wp:attachment":[{"href":"ht
tps:\/\/naijaglobalnews.org\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=45451"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/naijaglobalnews.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=45451"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/naijaglobalnews.org\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=45451"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}