{"id":44811,"date":"2026-02-20T14:09:02","date_gmt":"2026-02-20T14:09:02","guid":{"rendered":"https:\/\/naijaglobalnews.org\/?p=44811"},"modified":"2026-02-20T14:09:02","modified_gmt":"2026-02-20T14:09:02","slug":"mind-launches-inquiry-into-ai-and-mental-health-after-guardian-investigation-ai-artificial-intelligence","status":"publish","type":"post","link":"https:\/\/naijaglobalnews.org\/?p=44811","title":{"rendered":"Mind launches inquiry into AI and mental health after Guardian investigation | AI (artificial intelligence)"},"content":{"rendered":"<p>\n<\/p>\n<p class=\"dcr-130mj7b\">Mind is launching a significant inquiry into artificial intelligence and mental health after a Guardian investigation exposed how Google\u2019s AI Overviews gave people \u201cvery dangerous\u201d medical advice.<\/p>\n<p class=\"dcr-130mj7b\">In a year-long commission, the mental health charity, which operates in England and Wales, will examine the risks and safeguards required as AI increasingly influences the lives of millions of people affected by mental health issues worldwide.<\/p>\n<p class=\"dcr-130mj7b\">The inquiry \u2013 the first of its kind globally \u2013 will bring together the world\u2019s leading doctors and mental health professionals, as well as people with lived experience, health providers, policymakers and tech companies. Mind says it will aim to shape a safer digital mental health ecosystem, with strong regulation, standards and safeguards.<\/p>\n<p class=\"dcr-130mj7b\">The launch comes after the Guardian revealed how people were being put at risk of harm by false and misleading health information in Google AI Overviews. The AI-generated summaries are shown to 2 billion people a month, and appear above traditional search results on the world\u2019s most visited website.<\/p>\n<p class=\"dcr-130mj7b\">After the reporting, Google removed AI Overviews for some but not all medical searches. 
Dr Sarah Hughes, chief executive officer of Mind, said \u201cdangerously incorrect\u201d mental health advice was still being provided to the public. In the worst cases, the bogus information could put lives at risk, she said.<\/p>\n<p class=\"dcr-130mj7b\">Hughes said: \u201cWe believe AI has enormous potential to improve the lives of people with mental health problems, widen access to support, and strengthen public services. But that potential will only be realised if it is developed and deployed responsibly, with safeguards proportionate to the risks.<\/p>\n<p class=\"dcr-130mj7b\">\u201cThe issues exposed by the Guardian\u2019s reporting are among the reasons we\u2019re launching Mind\u2019s commission on AI and mental health, to examine the risks, opportunities and safeguards needed as AI becomes more deeply embedded in everyday life.<\/p>\n<p class=\"dcr-130mj7b\">\u201cWe want to ensure that innovation does not come at the expense of people\u2019s wellbeing, and that those of us with lived experience of mental health problems are at the heart of shaping the future of digital support.\u201d<\/p>\n<p class=\"dcr-130mj7b\">Google has said its AI Overviews, which use generative AI to provide snapshots of essential information about a topic or question, are \u201chelpful\u201d and \u201creliable\u201d.<\/p>\n<p class=\"dcr-130mj7b\">But the Guardian found some AI Overviews served up inaccurate health information and put people at risk of harm. 
The investigation uncovered false and misleading medical advice across a range of issues, including cancer, liver disease and women\u2019s health, as well as mental health conditions.<\/p>\n<p class=\"dcr-130mj7b\">Experts said some AI Overviews for conditions such as psychosis and eating disorders offered \u201cvery dangerous advice\u201d and were \u201cincorrect, harmful or could lead people to avoid seeking help\u201d.<\/p>\n<p class=\"dcr-130mj7b\">Google is also downplaying safety warnings that its AI-generated medical advice may be wrong, the Guardian found.<\/p>\n<p class=\"dcr-130mj7b\">Hughes said vulnerable people were being served \u201cdangerously incorrect guidance on mental health\u201d, including \u201cadvice that could prevent people from seeking treatment, reinforce stigma or discrimination and in the worst cases, put lives at risk\u201d.<\/p>\n<p class=\"dcr-130mj7b\">She added: \u201cPeople deserve information that is safe, accurate and grounded in evidence, not untested technology presented with a veneer of confidence.\u201d<\/p>\n<p><span class=\"dcr-1ypwo6h\">Quick Guide<\/span><\/p>\n<h4 class=\"dcr-1fa5dcn\">Contact Andrew Gregory about this story<\/h4>\n<p><strong>If you have something to share about this story, you can contact Andrew using one of the following methods.<\/strong><\/p>\n<p>The Guardian app has a tool to send tips about stories. Messages are end-to-end encrypted and concealed within the routine activity that every Guardian mobile app performs. This prevents an observer from knowing that you are communicating with us at all, let alone what is being said.<\/p>\n<p>If you don\u2019t already have the Guardian app, download it (iOS\/Android) and go to the menu. 
Select \u2018Secure Messaging\u2019.<\/p>\n<p><strong>Email (not secure)<\/strong><\/p>\n<p>If you don\u2019t need a high level of security or confidentiality you can email\u00a0andrew.gregory@theguardian.com<\/p>\n<p><strong>SecureDrop and other secure methods<\/strong><\/p>\n<p>If you can safely use the Tor network without being observed or monitored you can send messages and documents to the Guardian via our SecureDrop platform.<\/p>\n<p>Finally, our guide at theguardian.com\/tips\u00a0lists several ways to contact us securely, and discusses the pros and cons of each.<\/p>\n<p>Illustration: Guardian Design \/ Rich Cousins<\/p>\n<p class=\"dcr-130mj7b\">The commission, which will run for a year, will gather evidence on the intersection of AI and mental health, and provide an \u201copen space\u201d where the experience of people with mental health conditions will be \u201cseen, recorded and understood\u201d.<\/p>\n<p class=\"dcr-130mj7b\">Rosie Weatherley, information content manager at Mind, said that although Googling mental health information \u201cwasn\u2019t perfect\u201d before AI Overviews, it usually worked well. She said: \u201cUsers had a good chance of clicking through to a credible health website that answered their query, and then went further \u2013 offering nuance, lived experience, case studies, quotes, social context and an onward journey to support.<\/p>\n<p class=\"dcr-130mj7b\">\u201cAI Overviews replaced that richness with a clinical-sounding summary that gives an illusion of definitiveness. They give the user more of one form of clarity (brevity and plain English), while giving them less of another form of clarity (security in the source of the information, and how much to trust it). 
It\u2019s a very seductive swap, but not a responsible one.\u201d<\/p>\n<p class=\"dcr-130mj7b\">A Google spokesperson said: \u201cWe invest significantly in the quality of AI Overviews, particularly for topics like health, and the vast majority provide accurate information.<\/p>\n<p class=\"dcr-130mj7b\">\u201cFor queries where our systems identify a person might be in distress, we work to display relevant, local crisis hotlines. Without being able to review the examples referenced, we can\u2019t comment on their accuracy.\u201d<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Mind is launching a significant inquiry into artificial intelligence and mental health after a Guardian investigation exposed how Google\u2019s AI Overviews gave people \u201cvery dangerous\u201d medical advice. In a year-long commission, the mental health charity, which operates in England and Wales, will examine the risks and safeguards required as AI increasingly influences the lives of<\/p>\n","protected":false},"author":1,"featured_media":44812,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[51],"tags":[1564,1510,37,357,1443,1037,1372,1031,3059],"class_list":{"0":"post-44811","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-health","8":"tag-artificial","9":"tag-guardian","10":"tag-health","11":"tag-inquiry","12":"tag-intelligence","13":"tag-investigation","14":"tag-launches","15":"tag-mental","16":"tag-mind"},"_links":{"self":[{"href":"https:\/\/naijaglobalnews.org\/index.php?rest_route=\/wp\/v2\/posts\/44811","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/naijaglobalnews.org\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/naijaglobalnews.org\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/naijaglobalnews.org\/index.php?rest_route=\/wp\/v2\/users\/1"}],"rep
lies":[{"embeddable":true,"href":"https:\/\/naijaglobalnews.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=44811"}],"version-history":[{"count":0,"href":"https:\/\/naijaglobalnews.org\/index.php?rest_route=\/wp\/v2\/posts\/44811\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/naijaglobalnews.org\/index.php?rest_route=\/wp\/v2\/media\/44812"}],"wp:attachment":[{"href":"https:\/\/naijaglobalnews.org\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=44811"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/naijaglobalnews.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=44811"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/naijaglobalnews.org\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=44811"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}