Relational Preference Encoding in Looped Transformer Internal States

arXiv cs.LG · April 14, 2026


Key Points

  • A new arXiv study analyzes how a 2.6B “looped transformer” (Ouro-2.6B-Thinking) encodes human preferences across iterative internal states using the Anthropic HH-RLHF dataset and frozen base weights.
  • Lightweight evaluator heads trained on per-iteration hidden states reach 95.2% test accuracy in a pairwise setting, outperforming a full-batch L-BFGS probe (84.5%) while the underlying model remains unchanged.
  • The authors find preference is encoded primarily in a relational manner: linear probes on pairwise differences perform well (84.5%), whereas independent nonlinear evaluators and independent classifiers are much weaker—suggesting the evaluator measures the model's internal consistency rather than directly predicting noisy human labels.
  • Experiments and controls show that architectural and optimization details can create misleading performance ceilings when comparing pairwise and pointwise evaluators, and a proposed “flip test” is presented as a mandatory diagnostic for detecting evaluator bias and degenerate pairwise solutions.
  • A cosine learning-rate “dead zone” unintentionally functioned like early stopping, with test accuracy degrading substantially by later epochs, and cross-epoch analysis indicates antisymmetry stays stable while sign-flip rates track scorer bias.
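The flip test named in the key points can be sketched in a few lines: a well-behaved pairwise evaluator should be antisymmetric, so swapping its two arguments should invert the sign of its preference score. The sketch below is illustrative only — the paper's exact metric definitions are not given here, and the `evaluator` interface is a hypothetical stand-in for a trained evaluator head.

```python
import numpy as np

def flip_test(evaluator, pairs):
    """Diagnostic sketch for pairwise preference evaluators.

    Reports two quantities mentioned in the paper's analysis:
    - strict sign-flip rate: fraction of pairs where swapping the
      arguments inverts the sign of the score (sensitive to scorer bias)
    - antisymmetry correlation: correlation between score(a, b) and
      -score(b, a) (robust to a constant bias in the scores)
    """
    fwd = np.array([evaluator(a, b) for a, b in pairs])
    rev = np.array([evaluator(b, a) for a, b in pairs])
    flip_rate = float(np.mean(np.sign(fwd) == -np.sign(rev)))
    antisym_corr = float(np.corrcoef(fwd, -rev)[0, 1])
    return flip_rate, antisym_corr
```

A perfectly antisymmetric evaluator scores 1.0 on both metrics; a biased scorer can keep a high antisymmetry correlation while its strict sign-flip rate degrades, which matches the cross-epoch behavior the study reports.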

Abstract

We investigate how looped transformers encode human preference in their internal iteration states. Using Ouro-2.6B-Thinking, a 2.6B-parameter looped transformer with iterative refinement, we extract hidden states from each loop iteration and train lightweight evaluator heads (~5M parameters) to predict human preference on the Anthropic HH-RLHF dataset. Our pairwise evaluator achieves 95.2% test accuracy on 8,552 unseen examples, surpassing a full-batch L-BFGS probe (84.5%) while the base model remains completely frozen. Our central finding is that loop states encode preference predominantly relationally: a linear probe on pairwise differences achieves 84.5%, the best nonlinear independent evaluator reaches only 65% test accuracy, and linear independent classification scores 21.75%, below chance and with inverted polarity. Interpreted precisely, the evaluator functions as a model-internal consistency probe, measuring how stably Ouro's own learned value system organizes its representations rather than how well it predicts noisy human annotations. We also document a systematic architecture search that established a genuine 70% ceiling for independent scoring, and show how the 50% argument-swap protocol required to prevent degenerate pairwise solutions deflated pairwise training metrics by about 31 points at peak, creating the false appearance that pairwise and pointwise evaluators shared the same ceiling. Finally, we show that a cosine learning-rate dead zone at epoch 2 accidentally acted as early stopping, preserving the generalization peak before overfitting degraded test accuracy from 95.2% to 62.4% by epoch 5. Cross-epoch flip-test analysis shows that antisymmetry correlation remains stable while strict sign-flip rate mainly tracks scorer bias. We propose the flip test as a mandatory diagnostic for pairwise preference evaluators.
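The 50% argument-swap protocol discussed in the abstract can be illustrated with a small data-preparation sketch: each training pair randomly swaps its chosen/rejected slots and flips the label, so a pairwise evaluator cannot learn the degenerate shortcut "the first argument is always preferred." This is a minimal reconstruction under assumed conventions (hidden states as `(batch, dim)` arrays, label 1 meaning the first slot is preferred), not the paper's actual pipeline.

```python
import numpy as np

def swap_batch(chosen, rejected, p=0.5, rng=None):
    """Apply the 50% argument-swap protocol to one batch (sketch).

    chosen, rejected: (batch, dim) hidden-state arrays for the
    preferred and dispreferred responses.
    Returns (first, second, labels) where labels[i] == 1 iff the
    preferred response ended up in the first slot.
    """
    rng = rng or np.random.default_rng()
    swap = rng.random(chosen.shape[0]) < p          # which rows to swap
    first = np.where(swap[:, None], rejected, chosen)
    second = np.where(swap[:, None], chosen, rejected)
    labels = np.where(swap, 0, 1)                   # flip label with the swap
    return first, second, labels
```

Because the label distribution over slot positions becomes uniform, positional shortcuts stop paying off — which, per the abstract, is also why the protocol deflated raw pairwise training metrics and masked the gap between pairwise and pointwise ceilings.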