AI Navigate

Adaptive Decoding via Test-Time Policy Learning for Self-Improving Generation

arXiv cs.CL / 3/20/2026

📰 News · Tools & Practical Usage · Models & Research

Key Points

  • The paper introduces a reinforcement learning-based decoder sampler: a lightweight test-time policy that adjusts sampling parameters while the LLM's weights remain frozen.
  • The policy treats decoding as sequential decision-making and achieves relative gains of up to +88% over greedy and static baselines on summarization datasets (BookSum, arXiv, WikiHow) using Granite-3.3-2B and Qwen-2.5-0.5B.
  • Reward design experiments show composite rewards with shaping terms (length, coverage, repetition, completeness) outperform overlap-only objectives and enable stable improvements.
  • The work demonstrates test-time adaptation via RL as a practical mechanism for domain-aware, user-controllable generation without retraining large models.
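The core idea above, a lightweight policy that picks sampling parameters (e.g., temperature and top-p) at each decoding step while the base model stays frozen, can be sketched as follows. This is an illustrative toy, not the paper's implementation: `PolicySampler`, the `ACTIONS` grid, and the epsilon-greedy bandit-style update are all assumptions standing in for the paper's RL policy, and the "frozen model" is reduced to a raw logit vector.

```python
import math
import random

random.seed(0)

# Hypothetical discrete action space: each action is a (temperature, top_p)
# pair the policy can choose at a decoding step. Values are illustrative.
ACTIONS = [(0.7, 0.90), (1.0, 0.95), (1.3, 0.99)]

class PolicySampler:
    """Tiny epsilon-greedy policy over sampling-parameter actions.

    Stands in for the paper's learned test-time policy; the base LLM's
    weights are never touched, only the decoding parameters change.
    """

    def __init__(self, n_actions: int, eps: float = 0.1):
        self.values = [0.0] * n_actions  # running reward estimate per action
        self.counts = [0] * n_actions
        self.eps = eps

    def select(self) -> int:
        # Explore with probability eps, otherwise exploit the best estimate.
        if random.random() < self.eps:
            return random.randrange(len(self.values))
        return max(range(len(self.values)), key=lambda a: self.values[a])

    def update(self, action: int, reward: float) -> None:
        # Incremental mean update of the chosen action's value estimate.
        self.counts[action] += 1
        self.values[action] += (reward - self.values[action]) / self.counts[action]

def sample_token(logits, temperature: float, top_p: float) -> int:
    """Temperature scaling followed by nucleus (top-p) sampling."""
    probs = [math.exp(l / temperature) for l in logits]
    z = sum(probs)
    probs = [p / z for p in probs]
    # Keep the smallest set of tokens whose cumulative mass reaches top_p.
    order = sorted(range(len(probs)), key=lambda i: -probs[i])
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= top_p:
            break
    # Renormalize over the kept set and draw one token.
    mass = sum(probs[i] for i in kept)
    r, acc = random.random() * mass, 0.0
    for i in kept:
        acc += probs[i]
        if r <= acc:
            return i
    return kept[-1]

# One decoding step: the policy picks parameters, the frozen "model"
# (here just a logit vector) is sampled with them, and a scalar reward
# would then be fed back via sampler.update(action, reward).
sampler = PolicySampler(len(ACTIONS))
action = sampler.select()
temperature, top_p = ACTIONS[action]
token = sample_token([2.0, 0.5, -1.0], temperature, top_p)
```

In the paper's framing this loop runs as sequential decision-making across the generation, with the reward computed from the finished summary; the bandit update here is only the simplest stand-in for that RL machinery.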

Abstract

Decoding strategies largely determine the quality of Large Language Model (LLM) outputs, yet widely used heuristics such as greedy or fixed temperature/top-p decoding are static and often task-agnostic, leading to suboptimal or inconsistent generation quality across domains that demand stylistic or structural flexibility. We introduce a reinforcement learning-based decoder sampler that treats decoding as sequential decision-making and learns a lightweight policy to adjust sampling parameters at test time while keeping LLM weights frozen. We evaluate on summarization datasets including BookSum, arXiv, and WikiHow, using Granite-3.3-2B and Qwen-2.5-0.5B. Our policy sampler consistently outperforms greedy and static baselines, achieving relative gains of up to +88% (BookSum, Granite) and +79% (WikiHow, Qwen). Reward ablations show that overlap-only objectives underperform compared to composite rewards, while structured shaping terms (length, coverage, repetition, completeness) enable stable and sustained improvements. These findings highlight reinforcement learning as a practical mechanism for test-time adaptation in decoding, enabling domain-aware and user-controllable generation without retraining large models.
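The abstract's reward ablation, overlap alone versus a composite reward with shaping terms for length, coverage, repetition, and completeness, can be made concrete with a minimal sketch. The term definitions and weights below are assumptions for illustration; the paper's exact formulas are not reproduced here.

```python
def composite_reward(summary: str, reference: str, target_len: int = 60,
                     w_overlap: float = 1.0, w_len: float = 0.3,
                     w_cov: float = 0.3, w_rep: float = 0.3,
                     w_comp: float = 0.3) -> float:
    """Hypothetical composite reward: overlap plus shaping terms.

    Shaping terms follow the categories named in the abstract
    (length, coverage, repetition, completeness); the specific
    definitions and weights are illustrative assumptions.
    """
    s_toks = summary.lower().split()
    r_toks = reference.lower().split()
    if not s_toks or not r_toks:
        return 0.0
    s_set, r_set = set(s_toks), set(r_toks)

    # Overlap-only objective: recall of reference tokens.
    overlap = len(s_set & r_set) / len(r_set)
    # Length shaping: 1.0 at the target length, falling off linearly.
    length = 1.0 - min(1.0, abs(len(s_toks) - target_len) / target_len)
    # Coverage shaping: fraction of summary tokens grounded in the reference.
    coverage = len(s_set & r_set) / len(s_set)
    # Repetition shaping: 1.0 when no token is repeated.
    repetition = len(s_set) / len(s_toks)
    # Completeness shaping: crude check that the summary ends a sentence.
    completeness = 1.0 if summary.rstrip().endswith(('.', '!', '?')) else 0.0

    return (w_overlap * overlap + w_len * length + w_cov * coverage
            + w_rep * repetition + w_comp * completeness)
```

An overlap-only objective is recovered by zeroing the shaping weights, which is the degenerate case the ablation finds to underperform: such a reward can be gamed by degenerate outputs (e.g., repeating high-overlap tokens), whereas the shaping terms penalize exactly those failure modes.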