AI Navigate

ConFu: Contemplate the Future for Better Speculative Sampling

arXiv cs.CL / 3/11/2026

Ideas & Deep Analysis · Models & Research

Key Points

  • Speculative decoding speeds up large language model inference by using lightweight draft models to propose tokens, but current methods suffer from error accumulation as they only consider the current prefix.
  • ConFu (Contemplate the Future) is a new speculative decoding framework that enables draft models to anticipate future token generation direction, improving prediction quality.
  • It introduces contemplate tokens and soft prompts that draw future-oriented signals from the target model, a dynamic contemplate token mechanism based on mixture-of-experts (MoE), and a training framework with anchor token sampling and future prediction replication for robust future prediction.
  • Experiments show ConFu improves token acceptance rates and generation speed by 8-11% compared to EAGLE-3 on Llama-3 3B and 8B models across various tasks.
  • This work is the first to connect speculative decoding with continuous reasoning tokens, offering a new path to accelerate large language model inference.
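The draft-and-verify loop the first bullet describes follows the standard speculative sampling acceptance rule: a drafted token t is accepted with probability min(1, p_target(t) / p_draft(t)). Below is a minimal, illustrative sketch with toy probability tables — this is the generic verification step from the speculative decoding literature, not ConFu's specific mechanism, and the residual resampling step on rejection is omitted for brevity:

```python
import random

def speculative_verify(draft_probs, target_probs, draft_tokens, rng):
    """Verify a run of drafted tokens against the target model's
    distributions. Each token t is accepted with probability
    min(1, p_target(t) / p_draft(t)); the first rejection ends the run.
    Probabilities are toy dicts mapping token -> probability."""
    accepted = []
    for tok, q, p in zip(draft_tokens, draft_probs, target_probs):
        # q[tok]: draft probability of the proposed token
        # p[tok]: target probability of the same token
        if rng.random() < min(1.0, p[tok] / q[tok]):
            accepted.append(tok)
        else:
            break  # on rejection, the real algorithm resamples from the residual
    return accepted

rng = random.Random(0)
draft_tokens = ["a", "b"]
q = [{"a": 0.5, "b": 0.5}, {"a": 0.5, "b": 0.5}]
p = [{"a": 0.9, "b": 0.1}, {"a": 0.1, "b": 0.9}]
# Target assigns >= draft probability to each proposed token, so both are accepted.
print(speculative_verify(q, p, draft_tokens, rng))
```

Error accumulation, as the bullets note, arises because the draft model conditions only on the prefix: each drafted token feeds the next draft step, so early drift compounds and the acceptance rule rejects later tokens more often.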


arXiv:2603.08899 (cs)
[Submitted on 9 Mar 2026]

Title: ConFu: Contemplate the Future for Better Speculative Sampling

By Zongyue Qin and 5 other authors
Abstract: Speculative decoding has emerged as a powerful approach to accelerate large language model (LLM) inference by employing lightweight draft models to propose candidate tokens that are subsequently verified by the target model. The effectiveness of this paradigm critically depends on the quality of the draft model. While recent advances such as the EAGLE series achieve state-of-the-art speedup, existing draft models remain limited by error accumulation: they condition only on the current prefix, causing their predictions to drift from the target model over steps. In this work, we propose ConFu (Contemplate the Future), a novel speculative decoding framework that enables draft models to anticipate the future direction of generation. ConFu introduces (i) contemplate tokens and soft prompts that allow the draft model to leverage future-oriented signals from the target model at negligible cost, (ii) a dynamic contemplate token mechanism with MoE to enable context-aware future prediction, and (iii) a training framework with anchor token sampling and future prediction replication that learns robust future prediction. Experiments demonstrate that ConFu improves token acceptance rates and generation speed over EAGLE-3 by 8-11% across various downstream tasks with Llama-3 3B and 8B models. We believe our work is the first to bridge speculative decoding with continuous reasoning tokens, offering a new direction for accelerating LLM inference.
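The reported acceptance-rate and speed gains are linked by a standard result from the speculative decoding literature: if each of k drafted tokens is accepted independently with per-token rate α, the expected number of tokens produced per target-model pass is (1 − α^(k+1)) / (1 − α). A small sketch of that relationship (α and k here are illustrative values, not numbers from the paper):

```python
def expected_tokens_per_pass(alpha: float, k: int) -> float:
    """Expected tokens generated per target-model verification pass when
    each of k drafted tokens is accepted independently with rate alpha.
    Geometric-series result: (1 - alpha**(k + 1)) / (1 - alpha).
    At least one token is always produced (the resampled one on rejection)."""
    if alpha == 1.0:
        return float(k + 1)  # every drafted token plus the bonus token
    return (1.0 - alpha ** (k + 1)) / (1.0 - alpha)

# Modest acceptance-rate gains compound over the drafted run:
for alpha in (0.70, 0.75):
    print(alpha, expected_tokens_per_pass(alpha, k=5))
```

Because the expectation grows faster than linearly in α for a fixed draft length, a modest improvement in per-token acceptance (as ConFu reports over EAGLE-3) translates into a visible end-to-end throughput gain.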
Subjects: Computation and Language (cs.CL)
Cite as: arXiv:2603.08899 [cs.CL]
  (or arXiv:2603.08899v1 [cs.CL] for this version)
  https://doi.org/10.48550/arXiv.2603.08899

Submission history

From: Raghavv Goel
[v1] Mon, 9 Mar 2026 20:11:06 UTC (1,662 KB)