Extracting and Steering Emotion Representations in Small Language Models: A Methodological Comparison

arXiv cs.CL / 4/7/2026


Key Points

  • The paper presents the first comparative study of how to extract and analyze internal emotion representations in small language models (100M–10B parameters), testing nine models across five major architectural families.
  • It compares two emotion-vector extraction approaches (generation-based vs. comprehension-based) and finds generation-based methods yield significantly better emotion separation, with effects influenced by instruction tuning and model architecture.
  • Emotion representations are shown to localize primarily in middle transformer layers (around 50% depth), following a U-shaped depth pattern that appears invariant across parameter sizes (from 124M to 3B).
  • Causal steering experiments reveal three distinct behavioral regimes (surgical, repetitive collapse, and explosive degradation) driven more by architecture than by scale; the steering effects are externally validated by an emotion classifier (92% success, 37/40 scenarios).
  • The authors report cross-lingual emotion entanglement in Qwen, where steering activates semantically aligned Chinese tokens despite RLHF suppression, highlighting potential safety concerns for multilingual deployments.
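The extraction-and-steering pipeline summarized above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the emotion vector is taken as the difference of mean hidden states between emotional and neutral prompts at a chosen middle layer, then added (scaled) back into the residual stream. The array shapes and the synthetic activations are stand-ins for a real model's activations.

```python
import numpy as np

rng = np.random.default_rng(0)

def emotion_vector(emotional_acts: np.ndarray, neutral_acts: np.ndarray) -> np.ndarray:
    """Generation-style extraction sketch: the emotion direction is the
    difference of mean hidden states (emotional minus neutral) at one layer."""
    return emotional_acts.mean(axis=0) - neutral_acts.mean(axis=0)

def steer(hidden: np.ndarray, vec: np.ndarray, alpha: float = 1.0) -> np.ndarray:
    """Add the scaled emotion vector to every token's hidden state."""
    return hidden + alpha * vec

# Toy data: 32 prompts x 64-dim hidden states, mimicking a middle-layer
# residual stream. "Emotional" activations are shifted by a latent offset.
neutral = rng.normal(size=(32, 64))
emotional = neutral + 0.5  # hypothetical emotion-aligned shift

v = emotion_vector(emotional, neutral)
steered = steer(rng.normal(size=(10, 64)), v, alpha=2.0)
```

In a real experiment the activations would come from forward hooks on a transformer (e.g. at ~50% depth, where the paper localizes emotion representations), and `alpha` would be swept to map the behavioral regimes.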

Abstract

Small language models (SLMs) in the 100M-10B parameter range increasingly power production systems, yet whether they possess the internal emotion representations recently discovered in frontier models remains unknown. We present the first comparative analysis of emotion vector extraction methods for SLMs, evaluating 9 models across 5 architectural families (GPT-2, Gemma, Qwen, Llama, Mistral) using 20 emotions and two extraction methods (generation-based and comprehension-based). Generation-based extraction produces statistically superior emotion separation (Mann-Whitney p = 0.007; Cohen's d = -107.5), with the advantage modulated by instruction tuning and architecture. Emotion representations localize at middle transformer layers (~50% depth), following a U-shaped curve that is architecture-invariant from 124M to 3B parameters. We validate these findings against representational anisotropy baselines across 4 models and confirm causal behavioral effects through steering experiments, independently verified by an external emotion classifier (92% success rate, 37/40 scenarios). Steering reveals three regimes -- surgical (coherent text transformation), repetitive collapse, and explosive (text degradation) -- quantified by perplexity ratios and separated by model architecture rather than scale. We document cross-lingual emotion entanglement in Qwen, where steering activates semantically aligned Chinese tokens that RLHF does not suppress, raising safety concerns for multilingual deployment. This work provides methodological guidelines for emotion research on open-weight models and contributes to the Model Medicine series by bridging external behavioral profiling with internal representational analysis.
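The abstract's three steering regimes are quantified by perplexity ratios. The sketch below shows the shape of such a classifier; the threshold values and the mapping from ratio band to regime are illustrative assumptions, not figures from the paper.

```python
import math

def perplexity(token_logprobs: list[float]) -> float:
    """Perplexity = exp(negative mean token log-probability)."""
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

def steering_regime(steered_ppl: float, base_ppl: float,
                    low: float = 2.0, high: float = 10.0) -> str:
    """Classify a steering outcome by the perplexity ratio of steered vs.
    unsteered text. Thresholds `low`/`high` are hypothetical placeholders."""
    ratio = steered_ppl / base_ppl
    if ratio < low:
        return "surgical"    # coherent text transformation
    if ratio < high:
        return "repetitive"  # repetitive collapse
    return "explosive"       # text degradation

# Uniform log-prob of 1/2 per token gives perplexity exactly 2.
ppl = perplexity([-math.log(2.0)] * 8)
```

A fourth signal used in the paper, representational anisotropy, would serve as a baseline: separation between emotion clusters only counts if it exceeds the background anisotropy of the model's embedding space.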