When to Ensemble: Identifying Token-Level Points for Stable and Fast LLM Ensembling

arXiv cs.CL / 3/16/2026

Key Points

  • Ensembling LLMs at every token for long-form generation often degrades performance, highlighting the need for selective ensembling positions.
  • The SAFE framework identifies ensembling positions by jointly considering tokenization mismatch across models and consensus in their next-token probability distributions.
  • A probability sharpening strategy is introduced to prevent overly smooth ensemble distributions and to enable more confident token selections during ensembling.
  • Empirical results on benchmarks like MATH500 and BBH show SAFE achieves better accuracy and efficiency than existing methods, even when ensembling fewer than 1% of tokens.
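The two selection criteria above can be sketched as a simple position-level gate. This is an illustrative sketch only, not the paper's exact rule: the function names (`should_ensemble`, `entropy`), the entropy threshold, and the use of the averaged distribution's entropy as a proxy for "consensus" are all assumptions.

```python
import math

def entropy(probs):
    """Shannon entropy (nats) of a next-token probability distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def should_ensemble(model_a_probs, model_b_probs, token_boundary_aligned,
                    entropy_threshold=0.5):
    """Hypothetical SAFE-style position selector (sketch, not the paper's rule).

    Ensemble only when (1) both models' tokenizations align at this position,
    so their next-token distributions are comparable, and (2) the averaged
    distribution is uncertain enough that ensembling could change the choice.
    """
    if not token_boundary_aligned:
        # Tokenization mismatch: skip ensembling, decode with the base model.
        return False
    avg = [(a + b) / 2 for a, b in zip(model_a_probs, model_b_probs)]
    return entropy(avg) > entropy_threshold
```

With a gate like this, most positions fall through to ordinary single-model decoding, which is consistent with the paper's observation that ensembling fewer than 1% of tokens can suffice.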

Abstract

Ensembling Large Language Models (LLMs) has gained attention as a promising approach to surpass the performance of individual models by leveraging their complementary strengths. In particular, aggregating models' next-token probability distributions to select the next token has been shown to be effective in various tasks. However, while successful for short-form answers, its application to long-form generation remains underexplored. In this paper, we show that using existing ensemble methods in long-form generation requires a careful choice of ensembling positions, since the standard practice of ensembling at every token often degrades performance. We identify two key factors for determining the ensembling positions: tokenization mismatch across models and consensus in their next-token probability distributions. Based on this, we propose SAFE (Stable And Fast LLM Ensembling), a framework that selectively ensembles by jointly considering these factors. To further improve stability, we apply a probability sharpening strategy when the ensemble distribution becomes overly smooth, enabling the selection of more confident tokens during ensembling. Our experiments on diverse benchmarks, including MATH500 and BBH, demonstrate that SAFE outperforms existing methods in both accuracy and efficiency, with gains achieved even when ensembling fewer than 1% of tokens.
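The probability sharpening mentioned in the abstract can be illustrated with a standard temperature-style transform. This is a minimal sketch under assumptions: the exponent-and-renormalize rule and the `temperature` parameter are common choices for sharpening a distribution, not necessarily the exact mechanism used in SAFE.

```python
def sharpen(probs, temperature=0.5):
    """Sharpen an overly smooth ensemble distribution (illustrative sketch).

    Raises each probability to 1/temperature and renormalizes. With
    temperature < 1, mass concentrates on higher-probability tokens, so the
    ensemble makes more confident token selections; temperature = 1 leaves
    the distribution unchanged.
    """
    powered = [p ** (1.0 / temperature) for p in probs]
    total = sum(powered)
    return [p / total for p in powered]
```

For example, sharpening `[0.6, 0.4]` with `temperature=0.5` yields roughly `[0.69, 0.31]`: the gap between the top token and the rest widens, which is the stabilizing effect the abstract describes.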