Detecting HIV-Related Stigma in Clinical Narratives Using Large Language Models

arXiv cs.CL / 4/10/2026


Key Points

  • The study presents an LLM-based approach to detect HIV-related stigma in clinical narratives, addressing the lack of ready-to-use tools for extracting stigma information from clinical notes.
  • It uses UF Health clinical notes (2012–2022) and builds a labeled dataset of 1,332 annotated sentences across four stigma subscales: concern with public attitudes, disclosure concerns, negative self-image, and personalized stigma.
  • Encoder and generative LLMs are benchmarked using zero-shot and few-shot prompting, with GatorTron-large achieving the best overall performance (Micro-F1 = 0.62).
  • Few-shot prompting substantially boosts generative models: 5-shot GPT-OSS-20B (Micro-F1 = 0.57) and LLaMA-8B (Micro-F1 = 0.59) perform competitively, while zero-shot generative inference shows notable failure rates (up to 32%).
  • Predictive performance varies by subscale, with negative self-image easiest to detect and personalized stigma remaining the hardest, highlighting areas for future model refinement.
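The headline metric above, Micro-F1, pools true positives, false positives, and false negatives across all labels before computing a single F1 score. A minimal sketch of that computation for a single-label multi-class task follows; the subscale labels and predictions below are hypothetical illustrations, not data from the paper:

```python
def micro_f1(gold, pred):
    """Micro-averaged F1: pool TP/FP/FN across all labels, then
    compute precision, recall, and F1 once over the pooled counts."""
    tp = sum(1 for g, p in zip(gold, pred) if g == p)
    # For single-label multi-class prediction, each error counts as
    # one FP (for the predicted label) and one FN (for the gold
    # label), so micro-F1 coincides with accuracy.
    fp = fn = sum(1 for g, p in zip(gold, pred) if g != p)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical labels over the four stigma subscales.
gold = ["disclosure", "self_image", "public", "personalized", "self_image"]
pred = ["disclosure", "self_image", "public", "self_image", "self_image"]
print(round(micro_f1(gold, pred), 2))  # → 0.8
```

Note that for multi-label sentences (a sentence annotated with more than one subscale), micro-F1 would instead pool per-label TP/FP/FN counts, and the accuracy shortcut above no longer holds.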

Abstract

Human immunodeficiency virus (HIV)-related stigma is a critical psychosocial determinant of health for people living with HIV (PLWH), influencing mental health, engagement in care, and treatment outcomes. Although stigma-related experiences are documented in clinical narratives, there is a lack of off-the-shelf tools to extract and categorize them. This study aims to develop a large language model (LLM)-based tool for identifying HIV stigma from clinical notes. We identified clinical notes from PLWH receiving care at the University of Florida (UF) Health between 2012 and 2022. Candidate sentences were identified using expert-curated stigma-related keywords and iteratively expanded via clinical word embeddings. A total of 1,332 sentences were manually annotated across four stigma subscales: Concern with Public Attitudes, Disclosure Concerns, Negative Self-Image, and Personalized Stigma. We compared GatorTron-large and BERT as encoder-based baselines, and GPT-OSS-20B, LLaMA-8B, and MedGemma-27B as generative LLMs, under zero-shot and few-shot prompting. GatorTron-large achieved the best overall performance (Micro-F1 = 0.62). Few-shot prompting substantially improved generative model performance, with 5-shot GPT-OSS-20B and LLaMA-8B achieving Micro-F1 scores of 0.57 and 0.59, respectively. Performance varied by stigma subscale, with Negative Self-Image showing the highest predictability and Personalized Stigma remaining the most challenging. Zero-shot generative inference exhibited non-trivial failure rates (up to 32%). This study develops the first practical NLP tool for identifying HIV stigma in clinical notes.
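The few-shot setup described in the abstract can be sketched as a simple prompt builder: an instruction naming the four subscales, k labeled demonstrations, then the unlabeled target sentence. The instruction wording and the demonstration sentences below are hypothetical placeholders (synthetic text, not the paper's actual prompts or clinical data):

```python
SUBSCALES = [
    "Concern with Public Attitudes",
    "Disclosure Concerns",
    "Negative Self-Image",
    "Personalized Stigma",
]

def build_prompt(examples, target_sentence):
    """Assemble a few-shot classification prompt: an instruction,
    k labeled demonstrations, then the target sentence to label."""
    lines = [
        "Classify the clinical sentence into one of these HIV-stigma "
        "subscales: " + "; ".join(SUBSCALES) + ".",
        "",
    ]
    for sent, label in examples:
        lines.append(f"Sentence: {sent}")
        lines.append(f"Label: {label}")
        lines.append("")
    lines.append(f"Sentence: {target_sentence}")
    lines.append("Label:")
    return "\n".join(lines)

# Hypothetical (synthetic) demonstrations -- not real clinical text.
shots = [
    ("Patient worries coworkers would judge her if they knew.",
     "Concern with Public Attitudes"),
    ("He has not told his family about his diagnosis.",
     "Disclosure Concerns"),
]
prompt = build_prompt(shots, "She reports feeling ashamed of her status.")
print(prompt)
```

In the zero-shot condition, `examples` would simply be empty, leaving only the instruction and the target sentence; the 32% failure rate the paper reports for that setting refers to generations that could not be parsed into one of the four subscale labels.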