AI Navigate

Beyond Test-Time Training: Learning to Reason via Hardware-Efficient Optimal Control

arXiv cs.LG, 11 March 2026


Key Points

  • The paper formulates reasoning in language models as an optimal control problem and introduces the Test-Time Control (TTC) layer, which performs finite-horizon LQR (linear-quadratic regulator) planning over latent states at inference time.
  • The TTC layer represents a value function directly within the neural architecture and uses it as a nested objective, so the model plans before predicting instead of relying on external planning or test-time training.
  • For hardware efficiency and scalability, the authors derive a symplectic LQR solver and implement it as a fused CUDA kernel, enabling parallel execution with minimal runtime overhead.
  • Integrated as an adapter into pretrained large language models, TTC improves mathematical reasoning by up to +27.8% on MATH-500 and yields 2-3x Pass@8 improvements on the AMC and AIME benchmarks.
  • The results indicate that embedding optimal control as an architectural component is an effective and scalable mechanism for reasoning beyond traditional test-time training methods.
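The paper's layer internals are not reproduced here, but the core primitive named in the key points — finite-horizon LQR planning — has a standard textbook form. The sketch below is a minimal NumPy illustration of that primitive (the backward Riccati recursion and a closed-loop rollout); the function names and the toy double-integrator system are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def lqr_finite_horizon(A, B, Q, R, Qf, T):
    """Backward Riccati recursion for finite-horizon discrete-time LQR.

    Minimizes sum_{t<T} (x_t' Q x_t + u_t' R u_t) + x_T' Qf x_T
    subject to x_{t+1} = A x_t + B u_t. Returns time-varying gains K_t
    (so that u_t = -K_t x_t) and the value matrices P_t.
    """
    P = Qf.copy()
    Ks, Ps = [], [P]
    for _ in range(T):
        # Optimal gain at this step: K = (R + B'PB)^{-1} B'PA
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        # Riccati update: P <- Q + A'P(A - BK)
        P = Q + A.T @ P @ (A - B @ K)
        Ks.append(K)
        Ps.append(P)
    Ks.reverse()
    Ps.reverse()
    return Ks, Ps

def rollout(A, B, Ks, x0):
    """Apply u_t = -K_t x_t and return the resulting state trajectory."""
    xs = [x0]
    for K in Ks:
        xs.append(A @ xs[-1] - B @ (K @ xs[-1]))
    return xs
```

In the paper's setting the "states" would be latent activations rather than a physical system, but the planning computation has this shape: a backward value recursion followed by a forward feedback rollout.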

Computer Science > Machine Learning

arXiv:2603.09221 (cs)
[Submitted on 10 Mar 2026]

Title: Beyond Test-Time Training: Learning to Reason via Hardware-Efficient Optimal Control

By Peihao Wang and 10 other authors
Abstract: Associative memory has long underpinned the design of sequential models. Beyond recall, humans reason by projecting future states and selecting goal-directed actions, a capability that modern language models increasingly require but do not natively encode. While prior work uses reinforcement learning or test-time training, planning remains external to the model architecture. We formulate reasoning as optimal control and introduce the Test-Time Control (TTC) layer, which performs finite-horizon LQR planning over latent states at inference time, represents a value function within neural architectures, and leverages it as the nested objective to enable planning before prediction. To ensure scalability, we derive a hardware-efficient LQR solver based on a symplectic formulation and implement it as a fused CUDA kernel, enabling parallel execution with minimal overhead. Integrated as an adapter into pretrained LLMs, TTC layers improve mathematical reasoning performance by up to +27.8% on MATH-500 and yield 2-3x Pass@8 improvements on AMC and AIME, demonstrating that embedding optimal control as an architectural component provides an effective and scalable mechanism for reasoning beyond test-time training.
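The abstract's "symplectic formulation" plausibly refers to the classical symplectic structure of the discrete LQR Hamiltonian system; the sketch below is an assumption about that construction, not the paper's kernel. For invertible A and G = B R⁻¹ Bᵀ, the per-step transfer matrix that propagates stacked state/costate pairs [x; λ] is symplectic (MᵀJM = J), and because products of symplectic matrices are symplectic and matrix multiplication is associative, the whole horizon can be composed with a parallel prefix product — the kind of structure a fused parallel kernel can exploit.

```python
import numpy as np

def hamiltonian_transfer(A, B, Q, R):
    """Per-step transfer matrix of the discrete LQR Hamiltonian system.

    For invertible A and G = B R^{-1} B', the matrix
        M = [[A + G A^{-T} Q, -G A^{-T}],
             [-A^{-T} Q,       A^{-T}]]
    propagates [x_t; lam_t] -> [x_{t+1}; lam_{t+1}] and is symplectic.
    """
    G = B @ np.linalg.solve(R, B.T)
    Ait = np.linalg.inv(A).T                 # A^{-T}
    top = np.hstack([A + G @ Ait @ Q, -G @ Ait])
    bot = np.hstack([-Ait @ Q, Ait])
    return np.vstack([top, bot])

def is_symplectic(M, tol=1e-8):
    """Check M' J M = J for the standard symplectic form J."""
    n = M.shape[0] // 2
    J = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-np.eye(n), np.zeros((n, n))]])
    return np.allclose(M.T @ J @ M, J, atol=tol)
```

Since each partial product of these matrices stays symplectic, a log-depth scan over the horizon preserves the structure exactly — one plausible route from the symplectic formulation to the parallel execution the abstract claims.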
Subjects: Machine Learning (cs.LG)
Cite as: arXiv:2603.09221 [cs.LG]
  (or arXiv:2603.09221v1 [cs.LG] for this version)
  https://doi.org/10.48550/arXiv.2603.09221

Submission history

From: Peihao Wang
[v1] Tue, 10 Mar 2026 05:42:13 UTC (4,285 KB)