Length Value Model: Scalable Value Pretraining for Token-Level Length Modeling

arXiv cs.CL / 5/1/2026


Key Points

  • The paper introduces the Length Value Model (LenVM), a token-level approach for modeling remaining generation length in autoregressive LLMs, addressing the lack of fine-grained length supervision in prior work.
  • LenVM casts length modeling as a value estimation problem using a constant negative reward per generated token, producing a bounded, discounted return that acts as a monotonic proxy for the remaining generation horizon (see the worked formula after this list).
  • The method provides dense, annotation-free, and scalable supervision, and experiments show LenVM delivers strong inference-time signals across both LLMs and VLMs.
  • On the LIFEBench exact length-matching task, applying LenVM to a 7B model boosts length score from 30.9 to 64.8 and outperforms frontier closed-source models.
  • LenVM also enables controllable trade-offs between performance and efficiency, preserving 63% GSM8K accuracy at a 200-token budget (vs. 6% for a token-budget baseline) and providing interpretable token-level signals about reasoning length regimes.
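
To make that formulation concrete, the discounted return under a constant per-token reward can be written out explicitly. This is a sketch in our own notation, assuming a per-token reward of −c, a discount factor γ ∈ (0, 1), total generation length H, and current position t (these symbols are ours, not necessarily the paper's):

```latex
V(s_t) \;=\; \sum_{k=0}^{H-t-1} \gamma^{k}\,(-c)
       \;=\; -c\,\frac{1-\gamma^{\,H-t}}{1-\gamma}
```

Because γ < 1, this return is bounded in (−c/(1−γ), 0] and strictly decreasing in the remaining horizon H − t, which is exactly the bounded, monotonic-proxy property described above.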

Abstract

The token is the fundamental unit of computation in modern autoregressive models, and generation length directly influences both inference cost and reasoning performance. Despite its importance, existing approaches lack fine-grained length modeling, operating primarily at the coarse-grained sequence level. We introduce the Length Value Model (LenVM), a token-level framework that models the remaining generation length. By formulating length modeling as a value estimation problem and assigning a constant negative reward to each generated token, LenVM predicts a bounded, discounted return that serves as a monotonic proxy for the remaining generation horizon. This formulation yields supervision that is annotation-free, dense, unbiased, and scalable. Experiments on LLMs and VLMs demonstrate that LenVM provides a highly effective signal at inference time. On the LIFEBench exact length-matching task, applying LenVM to a 7B model improves the length score from 30.9 to 64.8, significantly outperforming frontier closed-source models. Furthermore, LenVM enables continuous control over the trade-off between performance and efficiency: on GSM8K at a budget of 200 tokens, LenVM maintains 63% accuracy compared to 6% for a token-budget baseline. It also accurately predicts total generation length from the prompt boundary. Finally, LenVM's token-level values offer an interpretable view of generation dynamics, revealing how specific tokens shift reasoning toward shorter or longer regimes. These results show that LenVM supports a broad range of applications and that generation length can be effectively modeled as a token-level value signal, highlighting its potential both as a general framework for length modeling and as a length-specific value signal that could support future RL training. Code is available at https://github.com/eric-ai-lab/Length-Value-Model.
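
To illustrate how such a value could be read out at inference time, here is a minimal Python sketch. It is our own construction, not code from the paper's repository: the reward scale c, the discount gamma, and the helper name remaining_length_from_value are all assumptions.

```python
import math

# Hypothetical helper (not from the LenVM repo): invert the discounted
# return of a constant per-token reward -c to estimate remaining length.
def remaining_length_from_value(v: float, c: float = 1.0, gamma: float = 0.99) -> float:
    """Given a predicted value v = -c * (1 - gamma**h) / (1 - gamma),
    recover the remaining horizon h = log(1 + v*(1-gamma)/c) / log(gamma)."""
    ratio = 1.0 + v * (1.0 - gamma) / c   # equals gamma**h, in (0, 1]
    ratio = min(max(ratio, 1e-12), 1.0)   # clamp for numerical safety
    return math.log(ratio) / math.log(gamma)

# Sanity check: a horizon of 50 remaining tokens round-trips exactly.
c, gamma, h = 1.0, 0.99, 50
v = -c * (1.0 - gamma**h) / (1.0 - gamma)
assert abs(remaining_length_from_value(v, c, gamma) - h) < 1e-6
```

Because the mapping from value to horizon is strictly monotonic, thresholding the raw value is equivalent to thresholding the estimated remaining length. One plausible use of this inversion is budget-aware decoding, steering generation while the estimated horizon exceeds a token budget, which is in the spirit of the controllable performance-efficiency trade-off the paper reports on GSM8K.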