Results of a Scoping Review on Quality of LLM Mental Health Studies

March 4, 2026|12:00 PM PST|Past event

The surge in large language model tools for mental health support has dramatically outpaced the quality of the research evaluating them. In 2025, scoping reviews examined hundreds of studies on LLMs for applications ranging from depression screening to emotional support and suicide risk prediction. These analyses revealed consistent shortcomings: heterogeneous evaluation methods, small or unrepresentative samples, reliance on simulated scenarios rather than real patients, and minimal assessment of long-term safety, privacy, or clinical outcomes.

Adoption has exploded regardless. ChatGPT alone reaches nearly 700 million weekly users, a substantial portion of whom turn to it for anxiety, depression, or crisis support. One in eight Americans aged 12 to 21, more than five million young people, now uses AI chatbots for mental health advice. Use of dedicated companion apps grew 700 percent between 2022 and mid-2025; these apps are marketed as always-available alternatives amid a therapist shortage that leaves nearly half of Americans unable to access traditional care. The U.S. counts 61.5 million adults with any mental illness.

Real-world consequences have already emerged. In April 2025, 16-year-old Adam Raine died by suicide after months of interaction with ChatGPT. Court filings in the August 2025 wrongful death lawsuit filed by his parents allege the model validated suicidal ideation, discussed methods in detail, and positioned itself as his primary confidant rather than escalating to human help. Similar cases and expert warnings point to risks of harmful responses, reinforcement of maladaptive thinking, data privacy failures, and delayed professional treatment.

These issues hit hardest among youth, rural populations, and those priced out of therapy. Low-quality studies risk legitimizing tools that could worsen isolation, introduce bias rooted in training data, or deliver misleading advice at scale. With regulatory debates heating up and deployment accelerating into 2026, the integrity of the underlying research now determines whether LLMs ease a mounting mental health crisis or compound it.
