On Stable Long-Form Generation: Benchmarking and Mitigating Length Volatility

arXiv cs.CL / 5/5/2026

Key Points

  • The paper introduces the VOLTBench benchmark to systematically measure “length volatility” in long-form text generation, focusing on the instability of output length across repeated runs rather than only single-generation quality (a toy metric sketch follows this list).
  • Using attention-trace analysis, the authors probe internal model behaviors and identify common patterns that contribute to this length volatility.
  • They propose GLoBo (Stable Generation via Logits Boosting), a lightweight decoding-stage optimization that improves length accuracy and stability without any additional training.
  • Experiments on VOLTBench show that mainstream LLMs can exhibit severe long-form generation instability; GLoBo improves the base model's mean output length by 148% and reduces length volatility by 69%, while preserving generation quality.
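
The summary does not spell out how VOLTBench scores volatility, so the sketch below is only an illustrative proxy: it treats length volatility as the coefficient of variation of output lengths across repeated generations of the same prompt, plus a mean relative error against a requested target length. Both metric names and the sample numbers are assumptions for demonstration, not the paper's definitions.

```python
import statistics

def length_volatility(lengths: list[int]) -> float:
    """Coefficient of variation (std / mean) of output lengths across
    repeated generations of the same prompt; higher means less stable."""
    return statistics.stdev(lengths) / statistics.mean(lengths)

def mean_length_error(lengths: list[int], target: int) -> float:
    """Mean relative deviation of output length from the requested length."""
    return statistics.mean(abs(n - target) / target for n in lengths)

# Five hypothetical samples for a single "write about 2000 words" prompt.
samples = [2100, 950, 1980, 430, 2040]
print(f"length volatility (CV): {length_volatility(samples):.2f}")   # 0.51
print(f"mean length error:      {mean_length_error(samples, 2000):.2f}")  # 0.28
```

A stable model would yield a CV near zero across resamples; the wide spread in the toy data above is the kind of behavior the benchmark is designed to surface.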

Abstract

Large Language Models (LLMs) excel at long-context understanding but exhibit significant limitations in long-form generation. Existing studies primarily focus on single-generation quality and generally overlook the volatility of the output. This volatility not only incurs significant computational cost but also undermines the reliable application of these models. To address this gap, our work unfolds in three stages: benchmarking, probing, and mitigation. We first propose the VOlatility in Long-form Text Benchmark (VOLTBench), a novel heterogeneous-task benchmark designed to systematically quantify the length volatility of long-form generation. Subsequently, by analyzing attention traces, we conduct an in-depth probe and identify several common internal patterns that cause this volatility. Finally, to mitigate long-form output volatility, we propose Stable Generation via Logits Boosting (GLoBo), a lightweight decoding-stage optimization strategy that significantly enhances both the length accuracy and the stability of long-form generation without additional training. Extensive experiments on VOLTBench provide the first systematic confirmation of severe long-form output instability in mainstream models and show that our method improves the mean output length of the base model by 148% and reduces length volatility by 69%, while maintaining high generation quality.
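
The abstract does not detail GLoBo's actual boosting rule, so the following is only a minimal sketch of the general idea of a training-free, decoding-stage logits adjustment, written against Hugging Face's `LogitsProcessor` interface. The class name, the EOS-penalty schedule, and all hyperparameters are illustrative assumptions, not the paper's method.

```python
import torch
from transformers import LogitsProcessor

class EosSuppressionProcessor(LogitsProcessor):
    """Toy decoding-stage length booster (not the paper's actual GLoBo):
    subtract a penalty from the EOS logit until generation approaches a
    target length, discouraging premature stops without any retraining."""

    def __init__(self, eos_token_id: int, target_len: int, penalty: float = 8.0):
        self.eos_token_id = eos_token_id
        self.target_len = target_len
        self.penalty = penalty

    def __call__(self, input_ids: torch.LongTensor,
                 scores: torch.FloatTensor) -> torch.FloatTensor:
        cur_len = input_ids.shape[-1]
        if cur_len < self.target_len:
            # Relax the penalty linearly as we near the target, so EOS
            # becomes competitive again around the desired length.
            scores[:, self.eos_token_id] -= self.penalty * (
                1.0 - cur_len / self.target_len
            )
        return scores
```

Such a processor can be passed to `model.generate(...)` via `logits_processor=LogitsProcessorList([...])`; because it only rescales logits at decode time, it adds no training cost, which matches the training-free property the paper claims for GLoBo.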