Residual Stream Analysis of Overfitting And Structural Disruptions

arXiv cs.LG / March 17, 2026

Key Points

  • The work shows that safety-focused fine-tuning with standard refusal templates raises the false-refusal rate on benign prompts from 63% to 84% as the share of safety data grows from 0% to 40%.
  • It finds that safety data exhibits substantially lower token entropy and 2-gram diversity (0.048) than general instruction data.
  • It introduces FlowLens, a stable PCA-based tool for analyzing residual-stream geometry, which reveals that safety data concentrates variance along a few principal components, reducing representational smoothness.
  • It proposes Variance Concentration Loss (VCL), a regularizer that penalizes excessive variance concentration in mid-layer residuals, reducing false refusals by over 35 percentage points while maintaining or improving performance on benchmarks like MMLU and GSM8K.
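The core idea behind both FlowLens and VCL can be illustrated with a small sketch: measure how much of the residual stream's variance the top-k principal components capture, and penalize concentration beyond a threshold. This is a minimal NumPy illustration under stated assumptions, not the paper's implementation; the function names, the choice of k, the threshold `tau`, and the hinge form of the loss are all hypothetical.

```python
import numpy as np

def variance_concentration(resid, k=4):
    """FlowLens-style metric (hypothetical): fraction of residual-stream
    variance captured by the top-k principal components."""
    # resid: (n_tokens, d_model) mid-layer residual activations
    centered = resid - resid.mean(axis=0, keepdims=True)
    # Singular values of the centered data give the PCA eigenvalue
    # spectrum (up to the 1/(n-1) normalization, which cancels in the ratio).
    sing = np.linalg.svd(centered, compute_uv=False)
    var = sing ** 2
    return var[:k].sum() / var.sum()

def vcl(resid, k=4, tau=0.5):
    """Hinge-style Variance Concentration Loss (sketch): penalize only
    the concentration that exceeds the target threshold tau."""
    return max(0.0, variance_concentration(resid, k) - tau)
```

Isotropic activations yield a low concentration and zero loss, while activations dominated by a couple of directions (as reported for safety-heavy fine-tuning) trigger the penalty. In training, this term would be added to the task loss with a small weight.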

Abstract

Ensuring that large language models (LLMs) remain both helpful and harmless poses a significant challenge: fine-tuning on repetitive safety datasets, where unsafe prompts are paired with standard refusal templates, often leads to false refusals, in which benign queries are declined. We first quantify this effect, showing that safety data exhibits substantially lower token entropy and 2-gram diversity (0.048) compared to general instruction data. To uncover the root cause, we introduce FlowLens, a stable PCA-based tool for residual-stream geometry analysis, and reveal that higher proportions of safety examples concentrate variance along a few components, reducing representational smoothness and driving false refusals (false refusal rate rises from 63 percent to 84 percent as safety data increases from 0 percent to 40 percent). Guided by these insights, we propose Variance Concentration Loss (VCL), an auxiliary regularizer that penalizes excessive variance concentration in mid-layer residuals. Empirical results demonstrate that VCL reduces false refusals by over 35 percentage points while maintaining or improving performance on general benchmarks such as MMLU and GSM8K.
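The diversity statistics the abstract cites can be computed with standard definitions: Shannon entropy over the unigram token distribution, and a distinct-2 style ratio of unique to total bigrams. The paper's exact formulations are not given here, so this is a plain-Python sketch under that assumption.

```python
import math
from collections import Counter

def token_entropy(tokens):
    """Shannon entropy (bits) of the unigram token distribution."""
    counts = Counter(tokens)
    n = len(tokens)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def distinct_2(tokens):
    """2-gram diversity: unique bigrams / total bigrams (distinct-2)."""
    bigrams = list(zip(tokens, tokens[1:]))
    return len(set(bigrams)) / len(bigrams) if bigrams else 0.0
```

On a corpus of repeated refusal templates, distinct-2 collapses toward the small values the paper reports for safety data, while varied instruction text stays near 1.0.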