AI Navigate

Learning Adaptive LLM Decoding

arXiv cs.LG / March 11, 2026

Ideas & Deep Analysis · Models & Research

Key Points

  • This paper proposes learning adaptive decoding policies for large language models (LLMs) that dynamically adjust the sampling strategy at inference time according to task difficulty and available compute resources.
  • Rather than fine-tuning the LLM itself, lightweight decoding adapters are trained with reinforcement learning using terminal rewards such as task correctness, improving inference efficiency.
  • Decoding is modeled at two levels: at the sequence level, a contextual bandit selects a decoding strategy per prompt; at the token level, a partially observable Markov decision process (POMDP) selects a sampling action at each token step.
  • Experiments on the MATH and CodeContests benchmarks show that these adaptive decoding adapters substantially improve accuracy over static decoding methods under a fixed compute budget.
  • Ablation studies confirm that both sequence-level and token-level adaptation contribute to better decoding performance and an improved accuracy-efficiency tradeoff.
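The sequence-level selection described in the bullets above can be sketched as a contextual-bandit loop over a small set of decoding strategies. This is an illustrative stand-in, not the paper's implementation: the strategy names mirror the abstract, but the epsilon-greedy learner, the simulated per-strategy success rates, and the omission of prompt-embedding features are all assumptions made for brevity.

```python
import random

# Hypothetical sketch: a bandit that learns which decoding strategy
# earns the highest terminal reward (e.g. task pass/fail).
STRATEGIES = ["greedy", "top_k", "min_p"]

class EpsilonGreedyBandit:
    """Epsilon-greedy bandit over a small discrete arm set."""
    def __init__(self, arms, epsilon=0.1, seed=0):
        self.arms = list(arms)
        self.epsilon = epsilon
        self.rng = random.Random(seed)
        # Per-arm running mean of the observed terminal reward.
        self.counts = {a: 0 for a in self.arms}
        self.values = {a: 0.0 for a in self.arms}

    def select(self):
        # Explore with probability epsilon, otherwise exploit.
        if self.rng.random() < self.epsilon:
            return self.rng.choice(self.arms)
        return max(self.arms, key=lambda a: self.values[a])

    def update(self, arm, reward):
        # Incremental running-mean update of the arm's value estimate.
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

bandit = EpsilonGreedyBandit(STRATEGIES)
# Simulated environment: invented success rates, pretending "min_p"
# solves these tasks most often.
true_rates = {"greedy": 0.3, "top_k": 0.5, "min_p": 0.7}
env_rng = random.Random(1)
for _ in range(2000):
    arm = bandit.select()
    reward = 1.0 if env_rng.random() < true_rates[arm] else 0.0
    bandit.update(arm, reward)

best = max(bandit.values, key=bandit.values.get)
```

The paper's version conditions the choice on the prompt embedding and a parallel sampling budget, which this toy loop drops; adding a context would turn each arm's scalar value into a learned function of those features.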

arXiv:2603.09065 (cs)
[Submitted on 10 Mar 2026]

Title:Learning Adaptive LLM Decoding

By Chloe H. Su and 5 other authors
Abstract:Decoding from large language models (LLMs) typically relies on fixed sampling hyperparameters (e.g., temperature, top-p), despite substantial variation in task difficulty and uncertainty across prompts and individual decoding steps. We propose to learn adaptive decoding policies that dynamically select sampling strategies at inference time, conditioned on available compute resources. Rather than fine-tuning the language model itself, we introduce lightweight decoding adapters trained with reinforcement learning and verifiable terminal rewards (e.g. correctness on math and coding tasks). At the sequence level, we frame decoding as a contextual bandit problem: a policy selects a decoding strategy (e.g. greedy, top-k, min-p) for each prompt, conditioned on the prompt embedding and a parallel sampling budget. At the token level, we model decoding as a partially observable Markov decision process (POMDP), where a policy selects sampling actions at each token step based on internal model features and the remaining token budget. Experiments on the MATH and CodeContests benchmarks show that the learned adapters improve the accuracy-budget tradeoff: on MATH, the token-level adapter improves Pass@1 accuracy by up to 10.2% over the best static baseline under a fixed token budget, while the sequence-level adapter yields 2-3% gains under fixed parallel sampling. Ablation analyses support the contribution of both sequence- and token-level adaptation.
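The token-level mechanism from the abstract can be illustrated with a hand-written policy standing in for the learned POMDP adapter: it maps two observable features (the entropy of the model's next-token distribution and the remaining token budget) to a sampling temperature. The thresholds and the temperature rule below are invented for this sketch, not taken from the paper.

```python
import math
import random

def entropy(probs):
    """Shannon entropy (nats) of a next-token distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0.0)

def pick_temperature(probs, tokens_left, budget):
    """Toy policy: decode greedily (T = 0) when the model is confident
    or the budget is nearly spent; otherwise sample more freely."""
    h = entropy(probs)
    frac_left = tokens_left / budget
    if h < 0.5 or frac_left < 0.1:
        return 0.0  # effectively greedy
    return min(1.0, h * frac_left)

def sample(probs, temperature, rng):
    """Sample a token index at the given temperature (0 = argmax)."""
    if temperature == 0.0:
        return max(range(len(probs)), key=probs.__getitem__)
    logits = [math.log(p) / temperature for p in probs]
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    r = rng.random() * sum(exps)
    acc = 0.0
    for i, e in enumerate(exps):
        acc += e
        if acc >= r:
            return i
    return len(probs) - 1

rng = random.Random(0)
confident = [0.9, 0.05, 0.05]   # low-entropy step
uncertain = [0.4, 0.35, 0.25]   # high-entropy step
t1 = pick_temperature(confident, tokens_left=80, budget=100)
t2 = pick_temperature(uncertain, tokens_left=80, budget=100)
tok = sample(confident, t1, rng)
```

In the paper, this mapping is a policy trained with reinforcement learning on internal model features rather than a fixed rule, and the "remaining token budget" enters the POMDP state so the policy can trade off exploration against the compute left.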
Subjects: Machine Learning (cs.LG)
Cite as: arXiv:2603.09065 [cs.LG]
  (or arXiv:2603.09065v1 [cs.LG] for this version)
  https://doi.org/10.48550/arXiv.2603.09065

Submission history

From: Huangyuan Su [view email]
[v1] Tue, 10 Mar 2026 01:15:26 UTC (2,511 KB)