TEMPO: Scaling Test-time Training for Large Reasoning Models
arXiv cs.LG / 4/22/2026
Key Points
- The paper studies test-time training (TTT) for large reasoning models and finds that existing methods quickly plateau because their self-generated reward signal drifts as the policy changes at inference.
- It proposes TEMPO, which alternates between refining the policy on unlabeled test questions and periodically recalibrating a critic on a small labeled dataset (a toy sketch of this loop follows the key points).
- The authors show that this alternating procedure can be formalized as an instance of the Expectation-Maximization (EM) algorithm, which reveals earlier approaches as incomplete variants that skip the critic-recalibration step.
- Reintroducing critic recalibration improves the evidence lower bound (ELBO) and yields sustained gains as more test-time compute is spent, rather than an early plateau.
- Experiments across model families and reasoning benchmarks report large accuracy gains (e.g., OLMO3-7B on AIME 2024 rises from 33.0% to 51.1%, and Qwen3-14B from 42.3% to 65.8%) while preserving high output diversity.
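For context, the EM framing above rests on the standard evidence lower bound. The identity below is the textbook form, not necessarily the paper's exact objective; $z$ stands in for whatever latent variable the authors introduce (e.g., reasoning traces):

```latex
\log p_\theta(x) \;\ge\; \mathbb{E}_{q(z)}\!\left[\log \frac{p_\theta(x, z)}{q(z)}\right] \;\equiv\; \mathrm{ELBO}(q, \theta),
\qquad
\text{E-step: } q(z) \leftarrow p_\theta(z \mid x), \qquad
\text{M-step: } \theta \leftarrow \arg\max_\theta \mathrm{ELBO}(q, \theta).
```

The toy Python below is our own illustration of the alternating loop the key points describe, under the assumption that the critic's reward drifts as the policy moves off its training distribution. `Policy`, `Critic`, `tempo_style_ttt`, and all constants are hypothetical stand-ins, not the paper's code or API:

```python
# Minimal sketch of a TEMPO-style alternating loop (hypothetical, toy-scale):
# refine the policy on unlabeled test questions using critic-scored rewards,
# and periodically recalibrate the critic on a small labeled set.
import random

class Policy:
    """Stand-in for the reasoning model being adapted at test time."""
    def __init__(self):
        self.skill = 0.0  # toy scalar standing in for model parameters

    def generate(self, question):
        # Toy "answer quality" in [0, 1]; a real policy would decode text.
        return min(1.0, max(0.0, self.skill + random.gauss(0, 0.1)))

    def update(self, reward):
        self.skill += 0.01 * reward  # toy policy-gradient-style step

class Critic:
    """Stand-in for the learned reward model that scores policy outputs."""
    def __init__(self):
        self.bias = 0.0  # drift accumulated as the policy shifts

    def score(self, answer):
        return answer + self.bias  # drifted reward signal

    def recalibrate(self, labeled_set):
        # Fit the critic back to ground-truth labels on a small labeled set.
        errors = [self.score(a) - label for a, label in labeled_set]
        self.bias -= sum(errors) / len(errors)

def tempo_style_ttt(policy, critic, test_questions, labeled_set,
                    recalibrate_every=50):
    for step, q in enumerate(test_questions):
        answer = policy.generate(q)
        policy.update(critic.score(answer))  # refine policy on critic reward
        critic.bias += 0.005                 # toy model of reward drift
        if (step + 1) % recalibrate_every == 0:
            critic.recalibrate(labeled_set)  # the step prior methods skip

policy, critic = Policy(), Critic()
labeled = [(0.8, 0.8), (0.3, 0.3)]  # (answer, ground-truth score) pairs
tempo_style_ttt(policy, critic, range(500), labeled)
print(f"final policy skill: {policy.skill:.2f}, critic bias: {critic.bias:.3f}")
```

Without the `recalibrate` call, the toy critic's bias grows without bound and the policy chases an increasingly inflated reward; with it, the bias is periodically reset against ground truth, which is the intuition behind why skipping recalibration leads to the plateau described above.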
Related Articles

The 67th Attempt: When Your "Knowledge Management" System Becomes a Self-Fulfilling Prophecy of Excellence
Dev.to

Context Engineering for Developers: A Practical Guide (2026)
Dev.to

GPT-5.5 is here. So is DeepSeek V4. And honestly, I am tired of version numbers.
Dev.to

I Built an AI Image Workflow with GPT Image 2.0 (+ Fixing Its Biggest Flaw)
Dev.to

Max-and-Omnis/Nemotron-3-Super-64B-A12B-Math-REAP-GGUF
Reddit r/LocalLLaMA