What Makes Good Instruction-Tuning Data? An In-Context Learning Perspective

arXiv cs.CL / 4/29/2026

💬 Opinion · Models & Research

Key Points

  • The paper argues that instruction-tuning datasets often include redundant and low-quality samples, so selecting high-value data is crucial.
  • It introduces a weighted in-context influence (wICI) framework that estimates how much each candidate example reduces instruction-following difficulty for semantically related examples.
  • Through systematic experiments, the study examines what “good” instruction-tuning data looks like from an in-context learning viewpoint, probing the relationships among sample difficulty, in-context influence, and instruction-tuning effectiveness.
  • Experiments across multiple models and benchmarks show the proposed selection method outperforms existing baselines when data budgets are limited.
  • The results also indicate that sample difficulty is negatively correlated with in-context influence, linking the selection signal to downstream performance gains.
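The core idea behind the selection signal can be sketched in a few lines. This is a hypothetical illustration, not the authors' code: `difficulty` stands in for a model's instruction-following loss on an example (optionally conditioned on a candidate used as an in-context demonstration), and `similarity` is a placeholder semantic-similarity score; the function names and the averaging scheme are assumptions for exposition.

```python
def wici_score(candidate, pool, difficulty, similarity):
    """Toy weighted in-context influence of `candidate`: the
    similarity-weighted average reduction in difficulty that using
    `candidate` as an in-context example yields on its semantic peers."""
    num, den = 0.0, 0.0
    for peer in pool:
        if peer is candidate:
            continue
        w = similarity(candidate, peer)  # weight by semantic relatedness
        # positive delta means the candidate makes the peer easier to follow
        delta = difficulty(peer, context=None) - difficulty(peer, context=candidate)
        num += w * delta
        den += w
    return num / den if den else 0.0

def select_top_k(pool, k, difficulty, similarity):
    """Pick the k examples with the highest wICI under a data budget."""
    scored = sorted(pool,
                    key=lambda c: wici_score(c, pool, difficulty, similarity),
                    reverse=True)
    return scored[:k]
```

Under this reading, a candidate scores highly when conditioning on it lowers the loss of many semantically related examples, which is exactly the property the paper ties to instruction-tuning effectiveness.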

Abstract

Instruction-tuning datasets often contain substantial redundancy and low-quality samples, necessitating effective data selection methods. We propose an instruction data selection framework based on weighted in-context influence (wICI), which measures how effectively each candidate example reduces instruction-following difficulty for semantically related peers. Through systematic experiments, we address three key questions: what constitutes effective instruction-tuning data from an in-context perspective, whether sample difficulty correlates with in-context influence, and how in-context influence translates to instruction-tuning effectiveness. Experiments across multiple models and benchmarks demonstrate that our method consistently outperforms existing baselines under constrained data budgets, while empirically showing that sample difficulty negatively correlates with in-context influence.