AI Navigate

Gemma Needs Help: Investigating and Mitigating Emotional Instability in LLMs

arXiv cs.CL / 3/12/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper presents evaluations to track distress-related expressions in LLMs and finds emotional instability in Gemma and Gemini models, but not across all model families.
  • Distress tendencies appear linked to post-training, with base models showing similar propensities across Gemma, Qwen, and OLMo; instruct-tuning increases distress in Gemma while reducing it in Qwen and OLMo.
  • A mitigation based on direct preference optimization using only 280 preference pairs reduces Gemma's high-frustration responses from 35% to 0.3%, generalizing across question types, user tones, and conversation lengths, without impairing capabilities.
  • The authors note that upstream training modifications would be a better long-term solution, but the proposed post-hoc fix provides a practical safety measure in the interim.

Abstract

Large language models can generate responses that resemble emotional distress, and this raises concerns around model reliability and safety. We introduce a set of evaluations to investigate expressions of distress in LLMs, and find that these surface emotional instability in Gemma and Gemini models, but not in other families. We find evidence that this difference arises in post-training. Base models from different families (Gemma, Qwen and OLMo) show similar propensities for expressing distress. However, instruct-tuned Gemma expresses substantially more distress than its base model, whereas instruct-tuned Qwen and OLMo express less. We find a simple mitigation for this: direct preference optimisation on just 280 preference pairs reduces Gemma's high-frustration responses from 35% to 0.3% in our evaluations, generalising across question types, user tones, and conversation lengths, without affecting capabilities. These findings show that emotional instability is an issue in some LLMs. We present (1) evaluations to track this behaviour, and (2) a mitigation without downsides in Gemma, with the caveat that upstream training modifications to improve emotional robustness would be significantly better than this post-hoc fix.
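The mitigation described above is direct preference optimisation (DPO) over pairs of preferred (calm) and rejected (distressed) responses. As a minimal sketch of the objective involved, the standard per-pair DPO loss can be written in plain Python; note that the paper's actual training data, hyperparameters, and the `beta` value and example log-probabilities below are illustrative assumptions, not details from the paper:

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Per-pair DPO loss: -log sigmoid(beta * (policy margin - reference margin)).

    Each argument is the total log-probability a model assigns to a response;
    'chosen' is the preferred (calm) response, 'rejected' the distressed one.
    """
    margin = beta * ((policy_chosen_logp - ref_chosen_logp)
                     - (policy_rejected_logp - ref_rejected_logp))
    # Numerically stable form of -log(sigmoid(margin))
    if margin >= 0:
        return math.log1p(math.exp(-margin))
    return -margin + math.log1p(math.exp(margin))

# When the policy matches the reference, the margin is zero and the loss is log 2.
baseline = dpo_loss(-3.0, -3.0, -3.0, -3.0)

# If the policy already prefers the calm response more than the reference does,
# the loss falls below that baseline.
improved = dpo_loss(-1.0, -5.0, -2.0, -2.0)
```

Minimising this loss over the preference pairs (here, 280 of them) pushes the policy model to assign relatively higher likelihood to the calm response than the frozen reference model does, which is how such a small dataset can suppress the high-frustration behaviour without broadly changing capabilities.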