Why Supervised Fine-Tuning Fails to Learn: A Systematic Study of Incomplete Learning in Large Language Models

arXiv cs.CL / April 14, 2026


Key Points

  • The paper identifies a persistent failure mode in supervised fine-tuning (SFT) of large language models: even after training convergence, models may not correctly reproduce a subset of their supervised training instances, termed the Incomplete Learning Phenomenon (ILP).
  • ILP is shown to be widespread across multiple LLM families, domains, and datasets, and aggregate evaluation metrics can hide these persistent “unlearned” subsets.
  • The authors formalize ILP as post-training failure to internalize supervised instances and propose a diagnostic-first framework that classifies unlearned samples into observable, recurrent causes.
  • Five key sources of incomplete learning are identified: missing prerequisite knowledge, conflicts with pre-training knowledge, internal inconsistencies in SFT data, left-side forgetting during sequential fine-tuning, and insufficient optimization for rare or complex patterns.
  • The study frames mitigation strategies as causal interventions and, through experiments on models including Qwen, LLaMA, and OLMo2, shows that incomplete learning is heterogeneous across models and can be reduced with targeted fixes.
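The core measurement behind ILP, as described above, is straightforward: after SFT has converged, re-prompt the model with its own training inputs and flag instances whose supervised targets it fails to reproduce. A minimal sketch of that check is below; the `generate` callable, the exact-match criterion, and the toy data are illustrative assumptions, not the paper's exact protocol.

```python
# Sketch of detecting "unlearned" SFT instances (the ILP subset).
# `generate` stands in for any greedy-decoding call on the fine-tuned model;
# exact string match is one possible (assumed) reproduction criterion.

def find_unlearned(generate, dataset):
    """Return (prompt, target, output) triples the model fails to reproduce."""
    unlearned = []
    for prompt, target in dataset:
        output = generate(prompt)
        if output.strip() != target.strip():  # assumed exact-match criterion
            unlearned.append((prompt, target, output))
    return unlearned

# Toy stand-in model: it has internalized only one of its two training instances.
memorized = {"2+2=": "4"}
toy_generate = lambda p: memorized.get(p, "unknown")

dataset = [("2+2=", "4"), ("The capital of France is", "Paris")]
missed = find_unlearned(toy_generate, dataset)
print(len(missed))  # 1
```

Note how the aggregate metric (50% reproduction accuracy here) says nothing about *which* instance failed or why — which is exactly the gap the paper's diagnostic framework targets.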

Abstract

Supervised Fine-Tuning (SFT) is the standard approach for adapting large language models (LLMs) to downstream tasks. However, we observe a persistent failure mode: even after convergence, models often fail to correctly reproduce a subset of their own supervised training data. We refer to this behavior as the Incomplete Learning Phenomenon (ILP). This paper presents the first systematic study of ILP in LLM fine-tuning. We formalize ILP as post-training failure to internalize supervised instances and demonstrate its prevalence across multiple model families, domains, and datasets. Through controlled analyses, we identify five recurrent sources of incomplete learning: (1) missing prerequisite knowledge in the pre-trained model, (2) conflicts between SFT supervision and pre-training knowledge, (3) internal inconsistencies within SFT data, (4) left-side forgetting during sequential fine-tuning, and (5) insufficient optimization for rare or complex patterns. We introduce a diagnostic-first framework that maps unlearned samples to these causes using observable training and inference signals, and study several targeted mitigation strategies as causal interventions. Experiments on Qwen, LLaMA, and OLMo2 show that incomplete learning is widespread and heterogeneous, and that improvements in aggregate metrics can mask persistent unlearned subsets. The findings highlight the need for fine-grained diagnosis of what supervised fine-tuning fails to learn, and why.
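The "diagnostic-first" framework described in the abstract can be pictured as a triage step that routes each unlearned sample to one of the five causes based on observable signals. The sketch below uses a hypothetical rule order and invented signal names (`base_model_knows_prerequisites`, `contradicts_pretraining`, etc.); the paper's actual signals and decision procedure are not specified here, so treat this purely as an illustration of the cause taxonomy.

```python
# Hedged sketch of cause triage for unlearned SFT samples.
# Signal names, thresholds, and rule order are illustrative assumptions,
# not the authors' actual diagnostic procedure.

CAUSES = (
    "missing_prerequisite",   # (1) base model lacks needed knowledge
    "pretraining_conflict",   # (2) supervision contradicts pre-training
    "data_inconsistency",     # (3) SFT data disagrees with itself
    "left_side_forgetting",   # (4) overwritten during sequential fine-tuning
    "under_optimization",     # (5) rare/complex pattern, too few updates
)

def diagnose(signals):
    """Map a dict of per-sample boolean signals to a hypothesized cause."""
    if not signals["base_model_knows_prerequisites"]:
        return "missing_prerequisite"
    if signals["contradicts_pretraining"]:
        return "pretraining_conflict"
    if signals["conflicting_duplicates_in_sft"]:
        return "data_inconsistency"
    if signals["learned_then_lost_across_stages"]:
        return "left_side_forgetting"
    return "under_optimization"  # default: nothing else explains the failure

print(diagnose({
    "base_model_knows_prerequisites": True,
    "contradicts_pretraining": False,
    "conflicting_duplicates_in_sft": False,
    "learned_then_lost_across_stages": True,
}))  # left_side_forgetting
```

The point of routing by cause first is that each cause implies a different mitigation (e.g. curriculum changes for missing prerequisites vs. more optimization steps for rare patterns), which is why a single aggregate fix tends to leave persistent unlearned subsets behind.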