Perturbation: A simple and efficient adversarial tracer for representation learning in language models

arXiv cs.CL · March 26, 2026


Key Points

  • The paper proposes “Perturbation,” a method for probing representation learning in language models by fine-tuning on a single adversarial example and tracking how that change “infects” other inputs.
  • It frames representations as “conduits for learning” rather than as fixed activation patterns, aiming to resolve a reported dilemma between overly restrictive geometric assumptions and trivializing representations.
  • The approach is described as assumption-light (no geometric constraints) and is claimed to avoid producing spurious representations in untrained models.
  • Experiments on trained LMs reportedly show structured transfer across multiple linguistic grain sizes, indicating that learned abstractions generalize in representation space.
  • Overall, the work provides a simple and efficient tracer for studying what representations LMs acquire through training experience rather than through imposed structure.

Abstract

Linguistic representation learning in deep neural language models (LMs) has been studied for decades, for both practical and theoretical reasons. However, finding representations in LMs remains an unsolved problem, in part due to a dilemma between enforcing implausible constraints on representations (e.g., linearity; Arora et al., 2024) and trivializing the notion of representation altogether (Sutter et al., 2025). Here we escape this dilemma by reconceptualizing representations not as patterns of activation but as conduits for learning. Our approach is simple: we perturb an LM by fine-tuning it on a single adversarial example and measure how this perturbation “infects” other examples. Perturbation makes no geometric assumptions, and unlike other methods, it does not find representations where it should not (e.g., in untrained LMs). But in trained LMs, perturbation reveals structured transfer at multiple linguistic grain sizes, suggesting that LMs both generalize along representational lines and acquire linguistic abstractions from experience alone.
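The core procedure described in the abstract can be sketched on a toy model. This is an illustrative reconstruction, not the paper's implementation: the linear model, the squared-error loss, and all names below are assumptions standing in for an actual LM and fine-tuning setup.

```python
import numpy as np

def loss(w, x, y):
    """Squared error of a toy linear model on one example."""
    return float((w @ x - y) ** 2)

def perturb_and_trace(w, adv_x, adv_y, probes, lr=0.1):
    """Fine-tune on a single adversarial example (one SGD step) and
    report how much the loss on each probe example shifts."""
    before = [loss(w, x, y) for x, y in probes]
    # One gradient step on the adversarial example only.
    grad = 2.0 * (w @ adv_x - adv_y) * adv_x
    w_new = w - lr * grad
    after = [loss(w_new, x, y) for x, y in probes]
    # Positive score = the probe was "infected" by the perturbation.
    return [a - b for a, b in zip(after, before)]

# Toy setup: weights the model has already "learned".
w = np.array([1.0, 0.0])
adv_x, adv_y = np.array([1.0, 0.0]), -1.0  # adversarial: flipped target
probes = [
    (np.array([1.0, 0.0]), 1.0),  # aligned with the adversarial input
    (np.array([0.0, 1.0]), 0.0),  # orthogonal: shares no structure
]
scores = perturb_and_trace(w, adv_x, adv_y, probes)
```

In this toy case the aligned probe's loss shifts while the orthogonal probe's does not, mirroring the paper's claim that perturbation transfers along representational lines rather than uniformly.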