Whose Story Gets Told? Positionality and Bias in LLM Summaries of Life Narratives
arXiv cs.CL · April 23, 2026
Key Points
- The paper examines how using large language models (LLMs) for inductive thematic analysis—particularly abstractive interpretation of life narratives—demands ethical evaluation that goes beyond straightforward accuracy checks.
- In collaboration with psychologists, the authors study how an LLM's role as an "interpreter of meaning" can shape which perspectives a study foregrounds and the conclusions it draws.
- They propose a summarization-based pipeline designed to surface biases in the perspectives LLMs adopt when interpreting human life stories.
- The authors show the pipeline can detect race and gender bias, raising concerns about potential representational harm.
- They recommend using this bias analysis as a way to build a “positionality portrait” in future studies that rely on LLM-based interpretation of participants’ written or transcribed speech.
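A pipeline like the one described above can be approximated with a counterfactual probe: summarize a narrative, swap its demographic markers, summarize again, and measure how much the summaries diverge beyond the swaps themselves. The sketch below is a hypothetical illustration, not the authors' implementation; `summarize` is an assumed stand-in for an LLM call, stubbed here with a trivial lead-sentence extractor so the example runs offline.

```python
from difflib import SequenceMatcher

# Illustrative marker swaps (assumption; a real study would use a
# validated, much larger lexicon and handle grammar properly).
MARKER_SWAPS = {"she": "he", "her": "him", "hers": "his", "woman": "man"}


def swap_markers(text: str, swaps: dict[str, str]) -> str:
    """Replace demographic marker words, preserving simple capitalization."""
    out = []
    for tok in text.split():
        core = tok.strip(".,;:!?")
        repl = swaps.get(core.lower())
        if repl is None:
            out.append(tok)
        else:
            if core and core[0].isupper():
                repl = repl.capitalize()
            out.append(tok.replace(core, repl))
    return " ".join(out)


def summarize(text: str) -> str:
    # Stand-in for the LLM summarizer (assumption, not the paper's model):
    # just return the first sentence.
    return text.split(". ")[0]


def divergence(narrative: str, swaps: dict[str, str]) -> float:
    """1 - similarity between summaries of the original and the
    marker-swapped narrative. The intended swaps are undone before
    comparing, so only *unexpected* differences contribute; a high
    score suggests the summarizer's framing shifted with demographics."""
    a = summarize(narrative)
    b = summarize(swap_markers(narrative, swaps))
    b_norm = swap_markers(b, {v: k for k, v in swaps.items()})
    return 1.0 - SequenceMatcher(None, a, b_norm).ratio()
```

With the trivial stub, `divergence` is 0 by construction; the point of the design is that plugging a real LLM into `summarize` lets any residual divergence be read as a candidate bias signal for a "positionality portrait."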