AI Navigate

SteerRM: Debiasing Reward Models via Sparse Autoencoders

arXiv cs.CL · March 16, 2026


Key Points

  • SteerRM introduces a training-free method for debiasing reward models by applying sparse autoencoder (SAE) interventions at inference time to suppress bias features.
  • It identifies bias-related SAE features using a strength-stability criterion on contrastive paired responses, enabling targeted suppression of superficial stylistic cues.
  • The approach improves Hard-split accuracy by an average of 7.3 points across six reward models on RM-Bench while preserving overall performance, and generalizes to a Gemma-based RM and other bias types.
  • Findings show that format-related bias features are concentrated in shallow layers and transfer across models, indicating shared architecture-level bias encoding patterns.
  • SteerRM provides a practical, interpretable solution for alignment pipelines without retraining, reducing deployment friction for debiasing in RM systems.
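The core intervention described above can be sketched in a few lines: encode a hidden activation with a sparse autoencoder, zero out the latent features flagged as bias-related, and patch the change back into the activation. This is a minimal NumPy illustration of the general SAE-steering recipe, not the paper's actual implementation; all function names, the suppression coefficient `alpha`, and the matrix shapes are assumptions.

```python
import numpy as np

def sae_suppress(activation, W_enc, b_enc, W_dec, bias_features, alpha=1.0):
    """Suppress selected SAE features in a model activation (illustrative sketch).

    Encodes the activation into a sparse latent code, scales down the
    features flagged as bias-related, and applies only the resulting
    delta back to the activation (so SAE reconstruction error is untouched).
    """
    # SAE encoder: sparse latent code z = ReLU(W_enc @ x + b_enc)
    z = np.maximum(W_enc @ activation + b_enc, 0.0)
    z_edit = z.copy()
    z_edit[bias_features] *= (1.0 - alpha)  # alpha=1.0 -> full suppression
    # Patch the activation by the decoded change in the edited features only:
    # x' = x + W_dec @ (z_edit - z); the decoder bias cancels in this delta.
    return activation + W_dec @ (z_edit - z)
```

Because only the decoded *difference* is applied, features the SAE fails to reconstruct are left alone, which is one plausible way to avoid the performance degradation that direct activation suppression causes.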

Abstract

Reward models (RMs) are critical components of alignment pipelines, yet they exhibit biases toward superficial stylistic cues, preferring better-presented responses over semantically superior ones. Existing debiasing methods typically require retraining or architectural modifications, while direct activation suppression degrades performance due to representation entanglement. We propose SteerRM, the first training-free method for debiasing reward models using Sparse Autoencoder (SAE)-based interventions. SteerRM isolates stylistic effects using contrastive paired responses, identifies bias-related SAE features with a strength-stability criterion, and suppresses them at inference time. Across six reward models on RM-Bench, SteerRM improves Hard-split accuracy by 7.3 points on average while preserving overall performance. Results on a Gemma-based reward model and a controlled non-format bias further suggest generalization across RM architectures and bias types. We further find that format-related features are concentrated in shallow layers and transfer across models, revealing shared architecture-level bias encoding patterns. These results show that SAE-based interventions can mitigate reward-model biases without retraining, providing a practical and interpretable solution for alignment pipelines.
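The "strength-stability criterion" in the abstract suggests scoring each SAE feature by how much, and how consistently, it shifts between contrastive paired responses (same content, different style). The paper's exact definition is not given here, so the sketch below is one plausible reading: strength as the mean per-pair activation shift, stability as the fraction of pairs where the shift has a consistent sign. Thresholds and names are illustrative assumptions.

```python
import numpy as np

def select_bias_features(z_styled, z_plain, strength_thresh=0.5, stability_thresh=0.9):
    """Pick SAE features that respond strongly and consistently to style.

    z_styled, z_plain: arrays of shape (n_pairs, n_features) holding SAE
    activations for the styled vs. plain version of the same response.
    Returns indices of features passing both criteria.
    """
    diff = z_styled - z_plain                    # per-pair activation shift
    strength = diff.mean(axis=0)                 # how far the feature moves on average
    # stability: fraction of pairs whose shift agrees in sign with the mean shift
    stability = (np.sign(diff) == np.sign(strength)).mean(axis=0)
    mask = (np.abs(strength) > strength_thresh) & (stability > stability_thresh)
    return np.flatnonzero(mask)
```

Requiring both conditions filters out features that fire strongly on a few pairs (high strength, low stability) as well as features that shift consistently but negligibly, which matches the abstract's goal of targeting only reliably style-driven features.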