Mind initiates investigation into artificial intelligence and mental health following Guardian report.

Mind is embarking on a groundbreaking inquiry into the intersection of artificial intelligence (AI) and mental health. The initiative follows a recent investigation by The Guardian, which revealed dangerous medical advice in Google’s AI Overviews. These AI-generated summaries can significantly affect the health and safety of millions, raising serious concerns about the quality of information distributed online.

This year-long inquiry is the first of its kind globally, spearheaded by the mental health charity operating in England and Wales. Mind will actively examine the potential risks and necessary safeguards as AI continues to play an increasingly influential role in the mental health landscape, affecting countless individuals around the globe.

The inquiry will convene leading medical practitioners and mental health specialists, alongside individuals with lived experiences, healthcare providers, policymakers, and technology corporations. Mind aims to create a safer digital environment for mental health, advocating for robust regulations, standards, and protective measures.

The inquiry’s launch coincides with alarming revelations that people were being misled by inaccurate health advice in Google’s AI Overviews. These AI-generated summaries reach up to 2 billion users a month and appear above traditional search results, giving them immense visibility on the internet.

In response to the Guardian’s exposé, Google removed AI Overviews from certain medical searches. However, Dr. Sarah Hughes, Mind’s CEO, warned that “dangerously incorrect” mental health guidance continues to be disseminated, and that in extreme cases such misinformation could have severe consequences for vulnerable individuals.

Dr. Hughes underscores the potential of AI to enhance the lives of those facing mental health challenges, advocating for wider access to support and strengthening public services. Yet, realizing this potential hinges on the responsible development and deployment of AI, complemented by adequate safeguards proportional to the inherent risks.

She further commented, “The concerns highlighted by The Guardian’s investigation are pivotal to our commission on AI and mental health, as we explore the risks, opportunities, and protections necessary as AI becomes more integrated into daily life.” Her emphasis on ensuring that innovation does not compromise individual well-being reflects Mind’s commitment to prioritizing those with lived experiences of mental health issues in shaping future digital support.

Google maintains that its AI Overviews, utilizing generative AI technology, provide valuable insights and information. The company describes these summaries as both “helpful” and “reliable” in their current form.

However, the Guardian investigation found troubling instances of misleading health information in AI Overviews, spanning topics from cancer and liver disease to mental health conditions. Experts warned that certain AI Overviews addressing psychosis and eating disorders contain “very dangerous advice,” labeling it “incorrect and harmful” and cautioning that it might prevent individuals from seeking necessary help.

Moreover, the investigation indicated that Google downplays the safety warnings attached to its AI-generated medical information, potentially leaving users at risk. Dr. Hughes reiterated that vulnerable individuals are being exposed to dangerously erroneous mental health guidance, including advice that could deter them from pursuing treatment, exacerbate stigma or discrimination, and, in severe cases, endanger lives.

“People deserve accurate, evidence-based information, not untested technology presented confidently,” Dr. Hughes added, reinforcing the imperative for high standards in the realm of mental health support.