Compute Aligned Training: Optimizing for Test Time Inference

arXiv cs.LG / 4/29/2026

📰 News · Models & Research

Key Points

  • The paper argues that conventional post-training methods like SFT and RL optimize each sample’s likelihood under a base policy, which can be misaligned with test-time procedures that use aggregated or filtered outputs.
  • It introduces “Compute Aligned Training,” which reformulates the training objective so it matches the inference-time strategy, treating inference strategies as operators applied to the base policy.
  • The authors derive new loss functions that directly maximize performance when a specific test-time strategy is applied (a sketch of the idea follows these points).
  • They instantiate these losses for SFT and RL across several common test-time strategies and report empirical results showing substantially better test-time scaling than standard training.
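The summary does not spell out the paper's losses, but a minimal sketch conveys the idea for one common strategy, best-of-n sampling with a perfect verifier: if p(x) is the probability that a single sample for prompt x is correct, the strategy succeeds with probability 1 - (1 - p(x))^n, so an aligned objective scores that quantity rather than p(x) itself. The function names and surrogate below are illustrative assumptions, not the paper's construction.

```python
import torch

def standard_loss(p: torch.Tensor) -> torch.Tensor:
    """Per-sample objective: maximize log p(correct) independently."""
    return -torch.log(p).mean()

def best_of_n_aligned_loss(p: torch.Tensor, n: int) -> torch.Tensor:
    """Objective aligned with best-of-n at test time: maximize
    log P(at least one of n i.i.d. samples is correct)."""
    success = 1.0 - (1.0 - p).pow(n)
    return -torch.log(success.clamp_min(1e-8)).mean()

# Per-prompt probabilities of sampling a correct answer.
p = torch.tensor([0.1, 0.4, 0.9], requires_grad=True)

standard_loss(p).backward()
g_std = p.grad.clone()
p.grad = None
best_of_n_aligned_loss(p, n=8).backward()
g_bon = p.grad.clone()

# Relative to the standard loss, gradient mass shifts toward prompts
# the policy rarely solves (p = 0.1); on prompts best-of-8 already
# nearly always covers (p = 0.9), the aligned gradient vanishes.
print(g_std, g_bon)
```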

Abstract

Scaling test-time compute has emerged as a powerful mechanism for enhancing Large Language Model (LLM) performance. However, standard post-training paradigms, namely Supervised Fine-Tuning (SFT) and Reinforcement Learning (RL), optimize the likelihood of individual samples under a base policy, creating a misalignment with test-time procedures that rely on aggregated or filtered outputs. In this work, we propose Compute Aligned Training, which aligns training objectives with test-time strategies. By conceptualizing inference strategies as operators on the base policy, we derive new loss functions that maximize performance when those strategies are applied. We instantiate these loss functions for SFT and RL across common test-time strategies. Finally, we provide empirical evidence that this training method substantially improves test-time scaling over standard training.
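To make the operator view concrete (the paper's exact formalism is not reproduced in this summary), an inference strategy can be treated as a map from the base policy's answer distribution to the distribution of the strategy's final output. The Monte Carlo estimate below, with assumed names, does this for majority voting over k samples.

```python
import random
from collections import Counter

def majority_vote_operator(pi: dict[str, float], k: int,
                           trials: int = 20_000) -> dict[str, float]:
    """Estimate O_k(pi): the output distribution of majority voting
    over k samples drawn i.i.d. from base policy pi (ties broken
    uniformly at random)."""
    answers, weights = zip(*pi.items())
    wins = Counter()
    for _ in range(trials):
        draws = random.choices(answers, weights=weights, k=k)
        counts = Counter(draws)
        top = max(counts.values())
        winners = [a for a, c in counts.items() if c == top]
        wins[random.choice(winners)] += 1
    return {a: wins[a] / trials for a in answers}

# A policy that is only 40% accurate per sample, but whose single
# most likely answer is the correct one, becomes markedly more
# accurate after voting over 9 samples. This is why the training
# objective should score the voted output, not each sample alone.
pi = {"correct": 0.40, "wrong_a": 0.35, "wrong_b": 0.25}
print(majority_vote_operator(pi, k=9))
```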