AI Navigate

Efficiently Aligning Draft Models via Parameter- and Data-Efficient Adaptation

arXiv cs.LG / March 11, 2026

Tools & Practical Usage · Models & Research

Key Points

  • Speculative decoding speeds up LLM inference but suffers performance drops when the target model is fine-tuned for a specific domain (a minimal draft-and-verify sketch follows this list).
  • The paper introduces Efficient Draft Adaptation (EDA), a parameter- and data-efficient framework for adapting draft models without costly retraining.
  • EDA features a decoupled architecture with shared and private components, a data regeneration strategy using fine-tuned models, and a sample selection mechanism prioritizing valuable data.
  • Experiments demonstrate that EDA restores speculative decoding performance on fine-tuned models with significantly reduced training costs and improved average acceptance lengths.
  • The authors provide open-source code to facilitate adoption and further research in efficient draft-model adaptation.
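
For readers new to the mechanism, the sketch below shows a simplified greedy draft-and-verify loop in Python. It is illustrative only: real implementations verify all k draft tokens in a single batched target forward pass and use a probabilistic acceptance rule, and every name here is hypothetical rather than taken from the paper.

# Simplified greedy speculative decoding step (illustrative sketch).
# Real systems verify all k draft tokens in one batched target forward
# pass and accept or reject probabilistically; this greedy-match variant
# only conveys where "average acceptance length" comes from.
from typing import Callable, List, Tuple

def speculative_step(
    draft_next: Callable[[List[int]], int],   # hypothetical greedy draft model
    target_next: Callable[[List[int]], int],  # hypothetical greedy target model
    prefix: List[int],
    k: int = 4,
) -> Tuple[List[int], int]:
    # 1) The cheap draft model proposes k tokens autoregressively.
    proposal = list(prefix)
    for _ in range(k):
        proposal.append(draft_next(proposal))
    drafted = proposal[len(prefix):]

    # 2) The target model checks the proposals: keep the longest agreeing
    #    prefix, then emit one target token (a correction on mismatch,
    #    a bonus token if everything was accepted).
    seq = list(prefix)
    accepted = 0
    for tok in drafted:
        if target_next(seq) == tok:
            seq.append(tok)
            accepted += 1
        else:
            break
    seq.append(target_next(seq))
    return seq, accepted

# Toy usage: a "fine-tuned" target that sometimes disagrees with the draft.
draft = lambda seq: (seq[-1] + 1) % 100
target = lambda seq: (seq[-1] + 1) % 100 if len(seq) % 3 else (seq[-1] + 2) % 100
print(speculative_step(draft, target, [0], k=4))   # ([0, 1, 2, 4], 2)

Averaging the accepted count over many steps gives the average acceptance length; when the target model is fine-tuned away from the data the draft model was trained on, agreement falls and the speedup shrinks, which is exactly the gap EDA addresses.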


arXiv:2603.09527 (cs)
[Submitted on 10 Mar 2026]

Title: Efficiently Aligning Draft Models via Parameter- and Data-Efficient Adaptation

Abstract: Speculative decoding accelerates LLM inference but suffers from performance degradation when target models are fine-tuned for specific domains. A naive solution is to retrain draft models for every target model, which is costly and inefficient. To address this, we introduce a parameter- and data-efficient framework named Efficient Draft Adaptation, abbreviated as EDA, for efficiently adapting draft models. EDA introduces three innovations: (1) a decoupled architecture that utilizes shared and private components to model the shared and target-specific output distributions separately, enabling parameter-efficient adaptation by updating only the lightweight private component; (2) a data regeneration strategy that utilizes the fine-tuned target model to regenerate training data, thereby improving the alignment between training and speculative decoding and leading to a higher average acceptance length; (3) a sample selection mechanism that prioritizes high-value data for efficient adaptation. Our experiments show that EDA effectively restores speculative decoding performance on fine-tuned models, achieving superior average acceptance lengths with significantly reduced training costs compared to full retraining. Code is available at this https URL.
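
The decoupled architecture in innovation (1) can be pictured as a frozen shared component paired with a small trainable private head. The PyTorch sketch below is a hedged reading of that description: the class name, bottleneck width, and additive logit combination are assumptions for illustration, not details confirmed by the paper.

# Hypothetical sketch of a shared/private decoupled draft model in PyTorch.
# The additive-logit combination and the bottleneck width are assumptions.
import torch
import torch.nn as nn

class DecoupledDraftModel(nn.Module):
    def __init__(self, backbone: nn.Module, hidden: int, vocab: int):
        super().__init__()
        self.backbone = backbone                      # shared feature extractor
        self.shared_head = nn.Linear(hidden, vocab)   # shared output distribution
        # Freeze all shared weights (the private head is created after this
        # loop, so it stays trainable).
        for p in self.parameters():
            p.requires_grad = False
        # Lightweight private component: a small bottleneck head per
        # fine-tuned target; only these weights change during adaptation.
        self.private_head = nn.Sequential(
            nn.Linear(hidden, hidden // 8),
            nn.GELU(),
            nn.Linear(hidden // 8, vocab),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.backbone(x)
        # Shared logits model the common distribution; the private head
        # adds a target-specific correction on top.
        return self.shared_head(h) + self.private_head(h)

# Adaptation touches only the private parameters:
model = DecoupledDraftModel(nn.Linear(256, 256), hidden=256, vocab=32000)
optimizer = torch.optim.AdamW(model.private_head.parameters(), lr=1e-4)
logits = model(torch.randn(2, 16, 256))               # (batch, seq, vocab)

Because only the private head receives gradients, adapting to another fine-tuned target means training a new small head rather than retraining the whole draft model, which is the parameter-efficiency claim in the abstract; innovations (2) and (3) then determine which data that head is trained on.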
Subjects: Machine Learning (cs.LG); Artificial Intelligence (cs.AI)
Cite as: arXiv:2603.09527 [cs.LG]
  (or arXiv:2603.09527v1 [cs.LG] for this version)
  https://doi.org/10.48550/arXiv.2603.09527

Submission history

From: Luxi Lin
[v1] Tue, 10 Mar 2026 11:35:58 UTC (1,339 KB)