Beyond Test-Time Training: Learning to Reason via Hardware-Efficient Optimal Control

arXiv cs.LG / March 11, 2026


Key Points

  • The paper formulates reasoning in language models as an optimal control problem and introduces the Test-Time Control (TTC) layer, which performs finite-horizon LQR planning over latent states at inference time.
  • The TTC layer embeds a value function directly inside the neural architecture, enabling planning before prediction without relying on external planners or test-time training.
  • To ensure hardware efficiency and scalability, the authors derive a symplectic LQR solver and implement it as a fused CUDA kernel, enabling parallel execution with minimal runtime overhead.
  • Integrated as an adapter into pretrained large language models, TTC substantially improves mathematical reasoning, achieving up to a +27.8% gain on MATH-500 and 2-3x Pass@8 improvements on the AMC and AIME benchmarks.
  • The work demonstrates that embedding optimal control as an architectural component is an effective and scalable route to stronger reasoning, going beyond conventional test-time training.
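The finite-horizon LQR planning mentioned above can be illustrated with the standard backward Riccati recursion. This is a generic textbook sketch on a toy linear system, not the paper's TTC layer or its CUDA solver; all names below (`finite_horizon_lqr`, the double-integrator dynamics) are illustrative choices, not from the paper.

```python
import numpy as np

def finite_horizon_lqr(A, B, Q, R, Qf, T):
    """Backward Riccati recursion for the finite-horizon discrete-time LQR problem
        min  sum_t ( x_t' Q x_t + u_t' R u_t ) + x_T' Qf x_T
        s.t. x_{t+1} = A x_t + B u_t.
    Returns the time-varying feedback gains K_0..K_{T-1}, with u_t = -K_t x_t."""
    P = Qf
    gains = []
    for _ in range(T):
        # Gain and cost-to-go update, sweeping backward from the terminal cost.
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1]  # reorder so gains[t] applies at step t

# Toy example: a double integrator regulated to the origin over a 20-step horizon.
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
Q, R, Qf = np.eye(2), np.array([[1.0]]), 10.0 * np.eye(2)
Ks = finite_horizon_lqr(A, B, Q, R, Qf, T=20)

# Roll the closed loop forward; the state is driven toward zero.
x = np.array([[5.0], [0.0]])
for K in Ks:
    x = A @ x - B @ (K @ x)
```

Running the rollout drives the state norm from 5.0 to near zero, which is the "planning over a finite horizon, then acting" pattern the TTC layer applies to latent states.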

Computer Science > Machine Learning

arXiv:2603.09221 (cs)
[Submitted on 10 Mar 2026]

Title:Beyond Test-Time Training: Learning to Reason via Hardware-Efficient Optimal Control

Authors: Peihao Wang and 10 other authors
Abstract: Associative memory has long underpinned the design of sequential models. Beyond recall, humans reason by projecting future states and selecting goal-directed actions, a capability that modern language models increasingly require but do not natively encode. While prior work uses reinforcement learning or test-time training, planning remains external to the model architecture. We formulate reasoning as optimal control and introduce the Test-Time Control (TTC) layer, which performs finite-horizon LQR planning over latent states at inference time, represents a value function within neural architectures, and leverages it as the nested objective to enable planning before prediction. To ensure scalability, we derive a hardware-efficient LQR solver based on a symplectic formulation and implement it as a fused CUDA kernel, enabling parallel execution with minimal overhead. Integrated as an adapter into pretrained LLMs, TTC layers improve mathematical reasoning performance by up to +27.8% on MATH-500 and 2-3x Pass@8 improvements on AMC and AIME, demonstrating that embedding optimal control as an architectural component provides an effective and scalable mechanism for reasoning beyond test-time training.
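The "symplectic formulation" the abstract refers to is not detailed on this page; a standard way to expose symplectic structure in finite-horizon LQR is via the Pontryagin two-point boundary-value problem, sketched below for illustration (assuming A is invertible; the paper's actual solver may be derived differently).

```latex
% Pontryagin conditions for  min \sum_t (x_t^\top Q x_t + u_t^\top R u_t)
% subject to x_{t+1} = A x_t + B u_t, with costate \lambda_t:
%   u_t = -R^{-1} B^\top \lambda_{t+1}, \qquad \lambda_t = Q x_t + A^\top \lambda_{t+1}.
% Eliminating u_t gives a linear transfer map on (x, \lambda):
\begin{pmatrix} x_{t+1} \\ \lambda_{t+1} \end{pmatrix}
=
\underbrace{\begin{pmatrix}
A + B R^{-1} B^\top A^{-\top} Q & \; -B R^{-1} B^\top A^{-\top} \\
-A^{-\top} Q & A^{-\top}
\end{pmatrix}}_{S}
\begin{pmatrix} x_t \\ \lambda_t \end{pmatrix},
\qquad
S^\top J S = J, \quad J = \begin{pmatrix} 0 & I \\ -I & 0 \end{pmatrix}.
```

Because products of symplectic matrices are symplectic, the whole-horizon transfer map factors into an associative matrix product that can be evaluated with a parallel scan; structure of this kind is a plausible basis for the fused, parallel CUDA implementation the abstract describes, though the paper's exact construction may differ.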
Subjects: Machine Learning (cs.LG)
Cite as: arXiv:2603.09221 [cs.LG]
  (or arXiv:2603.09221v1 [cs.LG] for this version)
  https://doi.org/10.48550/arXiv.2603.09221

Submission history

From: Peihao Wang [view email]
[v1] Tue, 10 Mar 2026 05:42:13 UTC (4,285 KB)