AI Navigate

Learning Adaptive LLM Decoding

arXiv cs.LG / 11 March 2026

Ideas & Deep Analysis · Models & Research

Key Points

  • The paper proposes learning adaptive decoding policies for large language models (LLMs) that dynamically adjust sampling strategies during inference based on task difficulty and compute budget.
  • Instead of fine-tuning the LLM itself, lightweight decoding adapters are trained via reinforcement learning with terminal rewards like task correctness, improving inference efficiency.
  • Decoding is modeled at two levels: a sequence-level contextual bandit chooses decoding strategies per prompt, and a token-level partially observable Markov decision process (POMDP) selects sampling actions per token.
  • Experiments on MATH and CodeContests benchmarks demonstrate that these adaptive decoding adapters significantly improve accuracy given fixed computational budgets compared to static decoding methods.
  • Ablation studies confirm that both sequence-level and token-level adaptation contribute to enhanced decoding performance and better accuracy-efficiency tradeoffs.
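The sequence-level scheme in the key points can be sketched as a small contextual bandit trained with REINFORCE. Everything below is a toy stand-in, not the paper's implementation: the strategy list, the linear softmax policy, and the synthetic reward (which pretends "easy" prompts succeed under greedy decoding and "hard" ones under top-k) are assumptions made for illustration.

```python
import numpy as np

# Hypothetical strategy set; the paper mentions greedy, top-k, and min-p.
STRATEGIES = ["greedy", "top_k", "min_p"]

class DecodingBandit:
    """Sequence-level contextual bandit: a linear softmax policy over
    decoding strategies, conditioned on a prompt embedding plus a
    normalized parallel-sampling-budget feature, trained with REINFORCE
    on a terminal 0/1 reward (e.g., task correctness)."""

    def __init__(self, embed_dim, n_arms, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        # +1 input feature for the normalized sampling budget.
        self.W = rng.normal(scale=0.01, size=(n_arms, embed_dim + 1))
        self.lr = lr

    def act(self, prompt_emb, budget, rng):
        ctx = np.append(prompt_emb, budget)   # context = embedding + budget
        z = self.W @ ctx
        p = np.exp(z - z.max())
        p /= p.sum()
        arm = int(rng.choice(len(p), p=p))
        return arm, ctx, p

    def update(self, arm, ctx, p, reward):
        # REINFORCE for a softmax policy: grad log pi(a|x) = (onehot(a) - p) x^T.
        grad = -np.outer(p, ctx)
        grad[arm] += ctx
        self.W += self.lr * reward * grad

# Toy training loop: reward 1 iff the chosen strategy matches a synthetic
# "correct" strategy (greedy for easy prompts, top-k for hard ones).
rng = np.random.default_rng(0)
bandit = DecodingBandit(embed_dim=2, n_arms=len(STRATEGIES))
for _ in range(3000):
    emb = rng.normal(size=2)                  # stand-in prompt embedding
    budget = rng.uniform(0.0, 1.0)            # stand-in sampling budget
    arm, ctx, p = bandit.act(emb, budget, rng)
    best = 0 if emb[0] > 0 else 1             # synthetic ground truth
    bandit.update(arm, ctx, p, reward=float(arm == best))
```

The terminal-reward-only update mirrors the bandit framing in the paper: a single scalar reward per prompt, with no per-token credit assignment.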

Computer Science > Machine Learning

arXiv:2603.09065 (cs)
[Submitted on 10 Mar 2026]

Title: Learning Adaptive LLM Decoding

Abstract: Decoding from large language models (LLMs) typically relies on fixed sampling hyperparameters (e.g., temperature, top-p), despite substantial variation in task difficulty and uncertainty across prompts and individual decoding steps. We propose to learn adaptive decoding policies that dynamically select sampling strategies at inference time, conditioned on available compute resources. Rather than fine-tuning the language model itself, we introduce lightweight decoding adapters trained with reinforcement learning and verifiable terminal rewards (e.g., correctness on math and coding tasks). At the sequence level, we frame decoding as a contextual bandit problem: a policy selects a decoding strategy (e.g., greedy, top-k, min-p) for each prompt, conditioned on the prompt embedding and a parallel sampling budget. At the token level, we model decoding as a partially observable Markov decision process (POMDP), where a policy selects sampling actions at each token step based on internal model features and the remaining token budget. Experiments on the MATH and CodeContests benchmarks show that the learned adapters improve the accuracy-budget tradeoff: on MATH, the token-level adapter improves Pass@1 accuracy by up to 10.2% over the best static baseline under a fixed token budget, while the sequence-level adapter yields 2-3% gains under fixed parallel sampling. Ablation analyses support the contribution of both sequence- and token-level adaptation.
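The token-level loop described in the abstract can be sketched minimally: at each decoding step a policy observes model-internal features and the remaining token budget, then picks a sampling action. In this sketch the only feature is the entropy of the next-token distribution, and the "policy" is a hand-written heuristic rather than the paper's learned POMDP policy; `heuristic_policy`, its thresholds, and the temperature values are all illustrative assumptions.

```python
import numpy as np

def entropy(p):
    """Shannon entropy of a probability vector (in nats)."""
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def heuristic_policy(ent, budget_left):
    """Stand-in for a learned token-level policy: decode greedily when
    the model is confident or the budget is nearly spent, otherwise
    sample at a moderate temperature. Thresholds are arbitrary."""
    return 0.0 if ent < 0.5 or budget_left < 0.2 else 0.8

def sample_with_adapter(logit_stream, max_tokens, policy, rng):
    """Adaptive decoding loop: the policy maps (feature, remaining
    budget fraction) to a temperature; temperature 0 means greedy."""
    out = []
    for t, logits in enumerate(logit_stream):
        if t >= max_tokens:
            break
        z = logits - logits.max()
        p = np.exp(z)
        p /= p.sum()
        temp = policy(entropy(p), (max_tokens - t) / max_tokens)
        if temp == 0.0:
            tok = int(np.argmax(logits))      # greedy action
        else:
            q = np.exp(z / temp)
            q /= q.sum()                      # temperature sampling
            tok = int(rng.choice(len(q), p=q))
        out.append(tok)
    return out
```

In the paper the per-token action comes from a policy learned over internal model features; swapping `heuristic_policy` for a trained network would recover that structure while the decoding loop stays the same.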
Subjects: Machine Learning (cs.LG)
Cite as: arXiv:2603.09065 [cs.LG]
  (or arXiv:2603.09065v1 [cs.LG] for this version)
  https://doi.org/10.48550/arXiv.2603.09065

Submission history

From: Huangyuan Su [view email]
[v1] Tue, 10 Mar 2026 01:15:26 UTC (2,511 KB)