Minimizing Collateral Damage in Activation Steering

arXiv cs.LG / 5/5/2026


Key Points

  • Activation steering shapes LLM behavior by modifying internal activations to increase their alignment with a chosen target feature direction, but common intervention methods can introduce “collateral damage” along other, non-target feature directions.
  • The authors show that this collateral damage arises because standard techniques implicitly assume the non-target features are isotropic, which is often not true.
  • They formalize collateral damage mathematically and recast steering as a constrained optimization problem to systematically control the side effects.
  • The proposed method selects the new activation that minimizes the expected squared collateral change, using a weighting derived from the empirical second-moment matrix of activations; this allows non-uniform penalties across feature directions (a closed-form sketch follows this list).
  • By leveraging this empirical second-moment weighting, the approach aims to improve steering precision while reducing performance degradation on unrelated tasks.
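
The paper's code is not reproduced here; the following is a minimal NumPy sketch of the second-moment-weighted steering idea, under the assumption that the steering target is a linear constraint on the activation change and that collateral cost is measured under the empirical second-moment matrix. The function name `steer_second_moment` and all variable names are illustrative, not taken from the paper.

```python
import numpy as np

def steer_second_moment(a, v, delta_align, M, ridge=1e-6):
    """Minimize d^T M d subject to v^T d = delta_align, then return a + d.

    a           : (D,) current activation
    v           : (D,) target feature direction (need not be unit norm)
    delta_align : desired change in alignment v^T a
    M           : (D, D) empirical second-moment matrix E[x x^T]
    ridge       : small regularizer so M is safely invertible

    Lagrange-multiplier closed form:
        d = delta_align * M^{-1} v / (v^T M^{-1} v).
    With M = I this reduces to isotropic vector addition,
        d = delta_align * v / ||v||^2.
    """
    M_reg = M + ridge * np.eye(M.shape[0])
    Minv_v = np.linalg.solve(M_reg, v)          # M^{-1} v without an explicit inverse
    d = (delta_align / (v @ Minv_v)) * Minv_v   # minimum-cost perturbation under M
    return a + d

# Toy usage: anisotropic activations make the weighted step differ from
# plain vector addition along v.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8)) * np.linspace(0.2, 2.0, 8)  # anisotropic samples
M = X.T @ X / len(X)                                        # empirical second moment
a = X[0]
v = np.zeros(8)
v[3] = 1.0                                                  # steer feature 3
a_new = steer_second_moment(a, v, delta_align=1.5, M=M)
print(v @ (a_new - a))  # ~1.5: the alignment change matches the constraint
```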

Abstract

Activation steering is a method for controlling Large Language Model (LLM) behavior by intervening in its internal representations to increase their alignment with a specific target feature direction. However, standard interventions, such as vector addition, often cause “collateral damage”, defined as unintended changes in the alignment of activations along other, non-target feature directions. This damage occurs because standard methods implicitly assume the isotropy of non-target features. In this work, we provide a mathematical formalization of collateral damage and introduce a principled framework that models steering as a constrained optimization problem. Our method finds a new activation that minimizes the expected squared collateral change, weighted by the empirical second-moment matrix of activations. This weighting encodes the non-uniform cost of perturbations in different feature directions, in contrast to isotropic approaches that penalize changes uniformly across all directions. By accounting for the empirical second moment of activations, our approach achieves more precise control while reducing the degradation of model performance on unrelated tasks.
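
For concreteness, the abstract suggests an optimization of roughly the following shape; this is an editorial reconstruction with assumed notation (current activation $a$, target direction $v$, target alignment $c$, second-moment matrix $M$), not the paper's own formulation:

$$
\min_{a'} \; (a' - a)^\top M \, (a' - a)
\quad \text{s.t.} \quad v^\top a' = c,
\qquad M = \mathbb{E}\!\left[x x^\top\right],
$$

whose Lagrange-multiplier solution is $a' = a + \frac{c - v^\top a}{v^\top M^{-1} v} \, M^{-1} v$; setting $M = I$ recovers standard isotropic vector addition along $v$.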