{"id":28271,"date":"2025-10-15T18:32:56","date_gmt":"2025-10-15T18:32:56","guid":{"rendered":"https:\/\/naijaglobalnews.org\/?p=28271"},"modified":"2025-10-15T18:32:56","modified_gmt":"2025-10-15T18:32:56","slug":"ai-is-trained-to-avoid-these-3-words-that-are-essential-to-learning-opinion","status":"publish","type":"post","link":"https:\/\/naijaglobalnews.org\/?p=28271","title":{"rendered":"AI Is Trained to Avoid These 3 Words That Are Essential to Learning (Opinion)"},"content":{"rendered":"<p>AI chatbots are like students who don\u2019t do the reading and raise their hand anyway.<\/p>\n<p>A new paper from researchers at OpenAI, the company behind ChatGPT, finds that a major reason AI hallucinates is that large language models are engineered to eliminate uncertainty. An AI chatbot that says \u201cI don\u2019t know\u201d gets the same training score as one that offers incorrect information. As a result, AI provides confident-sounding answers that are frequently wrong. Many people are swayed by AI\u2019s display of certainty, conflating the presentation of information with its quality.<\/p>\n<p>Education should teach students to grapple with complexity. AI is designed to avoid it. This mismatch is yet another reason to slow-roll the rush to put AI in students\u2019 hands. It also points to a question teachers should ask themselves when evaluating how to integrate AI into the classroom: Can I use this tool in a way that models the thinking I want to teach my students?<\/p>\n<p>For more than two decades, the Digital Inquiry Group and its earlier iteration at Stanford University have created curriculum that teaches students to read and think like historians, centered on document-based inquiry. 
After a 2016 study showed that young people struggle to evaluate online sources, our research group developed curriculum to help students separate fact from fiction on the internet.<\/p>\n<p>What historical thinking and online reasoning share is the imperative to look beyond the surface of information and instead seek a broad context before diving in. Information doesn\u2019t come out of nowhere: It\u2019s authored by someone, somewhere, for some purpose. These considerations are essential in deciding what to trust.<\/p>\n<p>Historians approach a document by sourcing it, glancing briefly at its contents before darting to the bottom to ponder its date, author, and relationship to the events it describes. These crucial details frame historians\u2019 subsequent reading.<\/p>\n<p>Similarly, when we studied how professional fact-checkers at the nation\u2019s leading news outlets approach an unfamiliar website, we noticed that they almost immediately opened new tabs and read laterally to gain context. To investigate an unfamiliar digital text, a savvy reader, paradoxically, first needs to leave it.<\/p>\n<p>The approach of both historians and fact-checkers differs from how students interact with historical texts, social media posts, and more recently, AI chatbot responses.<\/p>\n<p>Many students see historical documents as vessels of information, not altogether different from their textbook, and are oblivious to authorship and inattentive to historical context. Likewise, students and adults alike are often swayed by the appearance of a video on social media or the authoritative tone of a website\u2019s About page. Our most recent data at the Digital Inquiry Group suggest the same pattern may be emerging when it comes to AI\u2014and that the chatbot\u2019s confident tone is a likely culprit.<\/p>\n<p>In a pilot study in which students used AI to search the internet, we asked them to evaluate an answer from ChatGPT that failed to cite sources. 
\u201cIt touches on everything you could think of to ask,\u201d said one student. \u201cIt gave a detailed response,\u201d wrote another. We don\u2019t want students to approach any source as the sole arbiter of truth, be it a traditional textbook or a shiny new chatbot. We never want them to rely on fabricated citations when they can\u2019t find a source to support their argument\u2014something that even professionals have been caught doing.<\/p>\n<p>What we want is for students to weigh evidence. To recognize the challenging, fascinating, and rewarding process of piecing together a coherent account from multiple sources. To learn that admitting what we don\u2019t know is its own achievement, its own unique form of knowledge.<\/p>\n<p>And this is what concerns us about the findings of OpenAI\u2019s researchers. Chatbots, the researchers admit, are programmed to provide authoritative responses to complex, thorny, and often unanswerable questions. The companies designing chatbots disincentivize the very expression of uncertainty that is so crucial in the classroom.<\/p>\n<p>We shouldn\u2019t hold our breath waiting for AI companies to fix their models. The good news is that AI is malleable enough, and individual users capable enough, that educators can take immediate action to apply lessons we already have about good thinking, good research, and good education.<\/p>\n<p>Take, for example, an AI response that doesn\u2019t cite sources. Even a few hours of instruction can get students to pay more attention to where information comes from. Our research group is now experimenting with teaching students how to prompt a chatbot to cite its sources. Our goal is to nudge students to be skeptical of AI answers pulled from Reddit threads and random blog posts and instead direct the model to sources that reflect subject-matter expertise. 
Students need to see their interaction with a chatbot as a process, much like knowledge creation, rather than a one-and-done exchange.<\/p>\n<p>Despite its flaws, AI can serve as a powerful contextualization portal. But only if the people using it recognize how fallible it can be and how much we still don\u2019t know about how it works, and learn how to prompt it so that it produces quality responses.<\/p>\n<p>Information expert Mike Caulfield, for example, has illustrated how asking a chatbot to weigh the evidence for and against a claim can produce significantly better responses than just asking for a simple answer. With a more specific prompt, chatbots will often include qualifications about expert disagreement or lack of scholarly consensus.<\/p>\n<p>Good educators don\u2019t punish their students for uncertainty. And that means good educators should be cautious about placing in students\u2019 hands a technology that\u2019s trained to avoid saying \u201cI don\u2019t know.\u201d<\/p>\n<p>Our research group has long advocated digital literacy instruction and criticized approaches that tell students to stay away from search engines and shelter in the safety of peer-reviewed databases. But there is a difference between teaching students how to drive safely and throwing them into an F1 race car before they have a license.<br \/>Too much AI instruction right now looks like the latter. When it comes to AI in schools, all of us need a dose of humility that AI, at least for now, clearly lacks.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>AI chatbots are like students who don\u2019t do the reading and raise their hand anyway. A new paper from researchers at OpenAI, the company behind ChatGPT, finds that a major reason AI hallucinates is that large language models are engineered to eliminate uncertainty. 
An AI chatbot that says \u201cI don\u2019t know\u201d gets the same training<\/p>\n","protected":false},"author":1,"featured_media":28272,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[57],"tags":[2718,9660,585,440,16812,3787],"class_list":{"0":"post-28271","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-education","8":"tag-avoid","9":"tag-essential","10":"tag-learning","11":"tag-opinion","12":"tag-trained","13":"tag-words"},"_links":{"self":[{"href":"https:\/\/naijaglobalnews.org\/index.php?rest_route=\/wp\/v2\/posts\/28271","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/naijaglobalnews.org\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/naijaglobalnews.org\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/naijaglobalnews.org\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/naijaglobalnews.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=28271"}],"version-history":[{"count":0,"href":"https:\/\/naijaglobalnews.org\/index.php?rest_route=\/wp\/v2\/posts\/28271\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/naijaglobalnews.org\/index.php?rest_route=\/wp\/v2\/media\/28272"}],"wp:attachment":[{"href":"https:\/\/naijaglobalnews.org\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=28271"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/naijaglobalnews.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=28271"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/naijaglobalnews.org\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=28271"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}