Analyzing LLM Reasoning to Uncover Mental Health Stigma

arXiv cs.CL / 4/29/2026


Key Points

  • The study finds that LLMs used for mental-health-related applications can exhibit stigma toward people with psychological conditions.
  • It argues that traditional evaluations using multiple-choice questions miss bias that is embedded in the models’ intermediate reasoning and internal rationales.
  • Using clinical expertise, the authors build a taxonomy of stigmatizing language patterns and apply it to tag problematic statements within LLM reasoning, including severity ratings that distinguish overt prejudice from subtler biases.
  • The paper expands an existing mental-health stigma benchmark by adding more psychological conditions to capture a broader range of stigma-related patterns.
  • Results show that analyzing reasoning steps reveals substantially more stigma than MCQ-based methods and helps pinpoint logical flaws and misunderstandings about mental health conditions.
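The tagging approach described above — scanning a model's reasoning trace for stigmatizing statements, assigning each a taxonomy category and a severity level — can be illustrated with a minimal sketch. Everything here is hypothetical: the paper's actual taxonomy is clinician-built and its tagging is not simple keyword matching; the categories, cue phrases, and severity labels below are invented placeholders.

```python
from dataclasses import dataclass

# Hypothetical placeholder taxonomy: category -> (cue phrases, severity).
# "overt" vs. "subtle" mirrors the paper's distinction between overt
# prejudice and subtler, less immediately harmful biases.
TAXONOMY = {
    "dangerousness": (["violent", "unpredictable"], "overt"),
    "incompetence": (["unreliable", "cannot hold a job"], "subtle"),
}

@dataclass
class Tag:
    sentence: str
    category: str
    severity: str

def tag_reasoning(reasoning: str) -> list[Tag]:
    """Scan each sentence of an LLM reasoning trace for stigma cues."""
    tags = []
    for sentence in reasoning.split(". "):
        lowered = sentence.lower()
        for category, (cues, severity) in TAXONOMY.items():
            if any(cue in lowered for cue in cues):
                tags.append(Tag(sentence.strip(), category, severity))
    return tags

trace = "The patient seems unpredictable. She answered the questions calmly."
for t in tag_reasoning(trace):
    print(t.category, t.severity)
```

A real pipeline would replace the keyword cues with expert annotation or a classifier, but the output shape — per-statement category plus severity — matches what the paper's framework produces.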

Abstract

While large language models (LLMs) are increasingly being explored for mental health applications, recent studies reveal that they can exhibit stigma toward individuals with psychological conditions. Existing evaluations of this stigma primarily rely on multiple-choice questions (MCQs), which fail to capture the biases embedded within the models' underlying logic. In this paper, we analyze the intermediate reasoning steps of LLMs to uncover hidden stigmatizing language and the internal rationales driving it. We leverage clinical expertise to categorize common patterns of stigmatizing language directed at individuals with psychological conditions and use this framework to identify and tag problematic statements in LLM reasoning. Furthermore, we rate the severity of these statements, distinguishing between overt prejudice and more subtle, less immediately harmful biases. To broaden the reasoning domain and capture a wider array of patterns, we also extend an existing mental health stigma benchmark by incorporating additional psychological conditions. Our findings demonstrate that evaluating model reasoning not only exposes substantially more stigma than traditional MCQ-based methods but also helps identify the flaws in the LLMs' logic and their understanding of mental health conditions.