An Empirical Study of SFT-DPO Interaction and Parameterization in Small Language Models

arXiv cs.CL / 3/23/2026


Key Points

  • The paper systematically compares SFT-only, DPO-only, staged SFT-to-DPO, FFT, and LoRA on a GPT-2-scale decoder across paraphrase detection and Shakespearean sonnet continuation.
  • DPO yields small, task-dependent gains over strong SFT and can match competitive SFT accuracy without a warm start when the preference construction closely parallels the supervised objective.
  • Parameterization dominates: FFT consistently outperforms LoRA at matched training depth, and LoRA does not reduce wall-clock time on the authors' hardware.
  • In this small-scale regime, supervised full-parameter adaptation remains the primary performance lever, with preference optimization and low-rank adaptation providing limited marginal returns.
  • Practical takeaway: for small backbones, budget effort on full-parameter supervised tuning first; DPO and LoRA offer limited additional gains.
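To make the FFT-vs-LoRA comparison in the key points concrete, the sketch below shows the standard low-rank update that LoRA applies to a frozen weight matrix: the effective weight is W + (α/r)·BA, where B (d×r) and A (r×k) are the only trained parameters. This is a minimal pure-Python illustration of the general LoRA formulation, not the authors' implementation; the matrix shapes and scaling follow the original LoRA paper's convention.

```python
def matmul(X, Y):
    """Naive matrix multiply for small illustrative matrices."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def lora_effective_weight(W, A, B, alpha=1.0, r=1):
    """Effective weight under LoRA: W + (alpha / r) * (B @ A).

    W: d x k frozen base weight; B: d x r; A: r x k (trained).
    Only B and A receive gradients during LoRA training, which is
    why its parameter count is far below full fine-tuning's.
    """
    delta = matmul(B, A)  # rank-r update, shape d x k
    scale = alpha / r
    return [[w + scale * d for w, d in zip(w_row, d_row)]
            for w_row, d_row in zip(W, delta)]
```

Note that the low rank reduces trainable parameters and optimizer state, but (as the paper observes) the forward/backward pass still traverses the full base model, so wall-clock savings are not guaranteed on all hardware.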

Abstract

Direct Preference Optimization (DPO) is widely used after supervised fine-tuning (SFT) to align language models, yet its empirical behavior with small backbones and modest data remains underexplored. We systematically compare SFT-only, DPO-only, and staged SFT-to-DPO training alongside full fine-tuning (FFT) versus LoRA on a GPT-2-scale decoder, evaluating on paraphrase detection and Shakespearean sonnet continuation. DPO yields small, task-dependent gains over strong SFT and can match competitive SFT accuracy without a warm start when the preference construction closely parallels the supervised objective. In contrast, parameterization dominates: FFT consistently outperforms LoRA at matched training depth, and LoRA does not reduce wall-clock time on our hardware. These findings indicate that, in this small-scale regime, supervised full-parameter adaptation remains the primary performance lever, while preference optimization and low-rank adaptation provide limited marginal returns.
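For readers unfamiliar with the preference-optimization stage the abstract describes, the sketch below computes the standard per-pair DPO loss: a logistic loss on the policy's log-probability margin between chosen and rejected responses, measured relative to a frozen reference model (typically the SFT checkpoint). This is a generic illustration of the published DPO objective, assuming precomputed sequence log-probabilities; it is not the paper's training code.

```python
import math

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Per-pair DPO loss: -log sigmoid(beta * margin).

    The margin compares how much more the policy prefers the chosen
    response over the rejected one, relative to the reference model.
    beta controls how strongly deviations from the reference are
    penalized; all log-probs are summed over the response tokens.
    """
    margin = ((logp_chosen - ref_logp_chosen)
              - (logp_rejected - ref_logp_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))
```

When the policy and reference agree (zero margin), the loss is -log 0.5 ≈ 0.693; it falls as the policy widens the preference margin. The "DPO-only without a warm start" condition in the paper corresponds to using a base (non-SFT) checkpoint as both the initial policy and the reference.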