Synthesizing Instruction-Tuning Datasets with Contrastive Decoding

arXiv cs.CL · April 16, 2026


Key Points

  • The paper argues that LLM-generated responses used for instruction tuning conflate world knowledge acquired during pre-training with instruction-following skills acquired during post-training, diluting the instruction-tuning signal they carry.
  • It introduces CoDIT, which uses contrastive decoding between a post-trained model and its pre-trained counterpart to suppress shared pre-trained knowledge while amplifying instruction-following behavior during response generation.
  • Experiments show that instruction-tuning datasets synthesized with CoDIT lead to consistently better downstream model performance than datasets built from directly generated responses.
  • The authors report that CoDIT-built training data also outperforms several existing public instruction-tuning datasets across multiple benchmarks.
  • They provide theoretical and empirical evidence that CoDIT can be viewed as transferring (distilling) instruction-following “chat vector” information from parameter space to text space, enabling capability transfer across differing model architectures.
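The last point builds on the "chat vector" idea: subtracting a pre-trained checkpoint from its post-trained counterpart isolates an instruction-following delta in parameter space, which can then be added to another base model. A minimal sketch, with illustrative toy weights rather than real checkpoints:

```python
# Toy flat parameter vectors standing in for model checkpoints
# (illustrative numbers, not real weights).
pre  = [0.5, -1.0, 2.0]   # pre-trained base model
post = [0.7, -0.8, 2.1]   # the same model after instruction tuning

# The "chat vector": the instruction-following delta in parameter space.
chat_vector = [p - q for p, q in zip(post, pre)]

# Parameter-space transfer requires the shapes to match exactly, so the
# target base model must share the donor's architecture.
other_base = [1.0, 0.0, -0.5]
other_chat = [b + d for b, d in zip(other_base, chat_vector)]
```

CoDIT's claimed contribution is to distill this delta into generated text instead, so the transfer no longer requires matching architectures.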

Abstract

Using responses generated by high-performing large language models (LLMs) for instruction tuning has become a widely adopted approach. However, the existing literature overlooks a property of LLM-generated responses: they conflate world knowledge acquired during pre-training with instruction-following capabilities acquired during post-training. We hypothesize that disentangling the instruction-following capabilities from pre-trained knowledge improves the effectiveness of instruction tuning. To this end, we propose CoDIT, a method that applies contrastive decoding between a post-trained model and its pre-trained counterpart during response generation. The method suppresses pre-trained knowledge shared between the two models while amplifying the instruction-following behavior acquired via post-training, resulting in responses that more purely reflect instruction-following capabilities. Experimental results demonstrate that models trained on datasets constructed via CoDIT consistently outperform those trained on directly generated responses. Training on our datasets also yields better performance than training on existing publicly available instruction-tuning datasets across multiple benchmarks. Furthermore, we theoretically and empirically show that CoDIT can be interpreted as distilling the chat vector from parameter space to text space, enabling the transfer of instruction-tuning capabilities across models of different architectures.
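The core decoding step can be sketched as generic contrastive decoding; this is an illustration of the technique, not the paper's exact formulation, and the `alpha` plausibility cutoff and `beta` contrast strength are assumed hyperparameters:

```python
import math

def contrastive_decode_step(logits_post, logits_pre, alpha=0.1, beta=1.0):
    """Pick the next token by contrasting a post-trained model's
    distribution against its pre-trained counterpart.

    Tokens the post-trained model itself finds implausible (probability
    below alpha times its max probability) are masked out; the rest are
    scored by log p_post - beta * log p_pre, which down-weights tokens
    both models agree on (shared pre-trained knowledge) and boosts
    tokens favored only after post-training (instruction-following).
    """
    def log_softmax(logits):
        m = max(logits)
        lse = m + math.log(sum(math.exp(x - m) for x in logits))
        return [x - lse for x in logits]

    lp_post = log_softmax(logits_post)
    lp_pre = log_softmax(logits_pre)
    cutoff = math.log(alpha) + max(lp_post)

    best, best_score = None, -math.inf
    for i, (a, b) in enumerate(zip(lp_post, lp_pre)):
        if a >= cutoff:  # plausibility constraint
            score = a - beta * b
            if score > best_score:
                best, best_score = i, score
    return best

# Toy 4-token vocabulary: token 0 is favored by both models (shared
# pre-trained knowledge); token 1 is favored only by the post-trained
# model (behavior added by instruction tuning).
logits_post = [2.0, 1.5, 0.0, -1.0]
logits_pre = [2.0, -1.0, 0.0, -1.0]
```

Greedy decoding of the post-trained model alone would emit token 0 here; the contrastive step suppresses it (because the pre-trained model predicts it too) and emits token 1 instead.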