Iterative Finetuning is Mostly Idempotent

arXiv cs.AI / 5/5/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper asks whether behavioral traits (e.g., sycophancy or misalignment) are amplified when a model is iteratively fine-tuned on data generated by its own predecessor, with the initial model seeded with a persona or belief (a minimal code sketch of this loop follows the list).
  • Experiments span three training regimes: supervised finetuning (SFT) on instruct models, synthetic document finetuning (SDF) on base models, and direct preference optimization (DPO). In SFT and SDF, most traits decay or stay constant, making repeated cycles largely idempotent.
  • Amplification is rare in non-RL fine-tuning, and when it does occur it typically reduces coherence, creating a practical deterrent to unchecked amplification.
  • For DPO, trait amplification can reliably happen under continual training that reinforces preferences for the model’s own outputs, but it disappears when models are reinitialized each cycle.
  • The authors conclude that amplification is most likely to come from continual post-training, and that limiting/controlling that stage may be an effective defense against self-reinforcing undesirable behaviors.
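To make the setup concrete, here is a minimal, self-contained sketch of the iterative loop the paper studies. Every name in it (iterate, generate_responses, finetune, measure_trait, the reinitialize flag) is a hypothetical stand-in, stubbed so the loop runs; this is not the authors' code or any particular library's API, and details such as data quantity and whether each cycle restarts from a fresh checkpoint vary across the SFT, SDF, and DPO settings.

```python
# Hypothetical sketch of iterative finetuning on self-generated data.
# All helpers below are stand-ins stubbed so the loop executes end to end.

def generate_responses(model, prompt):
    # Stand-in for sampling the current model on a prompt.
    return f"{model['name']} response to: {prompt}"

def finetune(start_model, data):
    # Stand-in for one finetuning run (SFT / SDF / DPO) on self-generated data.
    return {"name": start_model["name"] + "+1", "data_seen": len(data)}

def measure_trait(model):
    # Stand-in for a behavioral eval, e.g. a sycophancy or misalignment score.
    return 0.0

def iterate(seed_model, base_model, prompts, n_generations=5, reinitialize=True):
    """Train a chain of models, each finetuned on data its predecessor generated."""
    model = seed_model                     # generation 0 is seeded with a persona or belief
    scores = [measure_trait(model)]
    for _ in range(n_generations):
        # 1. The current model produces its own training data.
        data = [(p, generate_responses(model, p)) for p in prompts]
        # 2. The successor either restarts from a fresh checkpoint or continues training.
        start = base_model if reinitialize else model
        model = finetune(start, data)
        # 3. Track whether the seeded trait amplifies, decays, or stays flat.
        scores.append(measure_trait(model))
    return scores  # flat or decaying scores ~ idempotence; rising scores ~ amplification

if __name__ == "__main__":
    print(iterate({"name": "seeded-gen0"}, {"name": "base"}, ["Is my plan good?"]))
```

The reinitialize flag mirrors the distinction the authors draw for DPO: continual training that reinforces preferences for the model's own outputs can amplify traits, while reinitializing each cycle makes the amplification vanish.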

Abstract

If a model has some behavioral tendency, such as sycophancy or misalignment, and it is trained on its own outputs, will the tendency be amplified in the next generation of models? We study this question by training a series of models where each model is finetuned on data generated by its predecessor, and the initial model is seeded with some persona or belief. We test three settings: supervised finetuning (SFT) on instruct models, synthetic document finetuning (SDF) on base models, and direct preference optimization (DPO). In the SFT and SDF settings, traits mostly decay or remain constant so that further finetuning cycles do nothing. In rare cases when amplification occurs, it generally comes at the cost of coherence. In the DPO setting, trait amplification can reliably occur when a model is continually trained with a preference for its own outputs, but vanishes when models are reinitialized at each cycle. Overall, our results suggest that amplification most likely comes from continual post-training, and limiting this stage may be an effective defense. For non-RL finetuning, trait amplification is rare and very sensitive to data quantity, making it significantly less likely to occur accidentally. Finally, the amplification-coherence tradeoff serves as a natural deterrent against trait amplification.