IRIS: Interpolative Rényi Iterative Self-play for Large Language Model Fine-Tuning

arXiv cs.LG / 4/24/2026

📰 News · Models & Research

Key Points

  • IRIS (Interpolative Rényi Iterative Self-play) is a new self-play fine-tuning framework for large language models that uses Rényi-based objectives with a continuously adjustable divergence order parameter (α).
  • The method decomposes training into two tilted-risk terms over annotated (human-labeled) and synthetic (self-generated) data, using exponential importance weights governed by α.
  • By interpreting prior self-play objectives as special cases or limiting regimes at specific α values, the paper provides a unified theoretical view and motivates an adaptive α schedule that shifts from sharper weighting early to smoother refinement near convergence.
  • The authors theoretically analyze IRIS’s fixed-point property and how α affects gradient concentration, and empirically validate improvements on Zephyr-7B and Qwen2.5-3B across 10 benchmarks.
  • Experiments report an average score of 44.57% with gains accumulating across iterations, and show that IRIS trained on only 26k annotated samples can outperform standard supervised fine-tuning trained on the full 200k dataset.

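To make the second key point concrete, the following is a minimal sketch of a tilted risk with exponential importance weights controlled by an order parameter α. This is the standard tilted-ERM construction, not the paper's exact IRIS objective (which is not given in this summary); the function names and the choice of per-example losses are illustrative assumptions. It shows the behavior the summary describes: small α recovers near-uniform averaging, while larger α concentrates weight (and hence gradient signal) on high-loss examples.

```python
import math

def tilted_risk(losses, alpha):
    """Tilted risk (1/alpha) * log mean(exp(alpha * loss)).

    As alpha -> 0 this approaches the plain mean loss; larger
    positive alpha up-weights high-loss examples. Illustrative
    sketch only, not the IRIS objective itself.
    """
    if abs(alpha) < 1e-8:
        return sum(losses) / len(losses)
    # log-sum-exp stabilization to avoid overflow
    m = max(alpha * l for l in losses)
    s = sum(math.exp(alpha * l - m) for l in losses)
    return (m + math.log(s / len(losses))) / alpha

def tilt_weights(losses, alpha):
    """Exponential (softmax-style) importance weights
    proportional to exp(alpha * loss), normalized to sum to 1."""
    m = max(alpha * l for l in losses)
    w = [math.exp(alpha * l - m) for l in losses]
    z = sum(w)
    return [x / z for x in w]
```

In this picture, one such tilted term would be evaluated over annotated data and another over self-generated data, with α interpolating between smoother (near-uniform) and sharper (max-loss-dominated) weighting regimes.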
Abstract

Self-play fine-tuning enables large language models to improve beyond supervised fine-tuning without additional human annotations by contrasting annotated responses with self-generated ones. Many existing methods rely on a fixed divergence regime: SPIN is closely related to a KL-based regime, SPACE to a Jensen-Shannon-style objective via noise contrastive estimation, and SPIF to χ²-regularized self-play. Since these divergences exhibit different strengths depending on the distributional gap between model and target, no single choice appears to provide favorable learning dynamics across training stages. We propose IRIS (Interpolative Rényi Iterative Self-play), a Rényi-based self-play fine-tuning framework with a continuously adjustable objective. IRIS decomposes into two independent tilted risk terms over annotated and synthetic data, with exponential importance weights controlled by the order parameter α. We show that several self-play objectives can be interpreted as limiting or representative regimes at particular values of α, providing a unified theoretical perspective on these methods. An adaptive order schedule further adjusts α to the distributional gap, shifting from sharper importance weighting early in training to smoother refinement near convergence. Theoretically, we establish the fixed-point property of IRIS and analyze how α controls gradient concentration. Experiments on Zephyr-7B and Qwen2.5-3B across ten benchmarks show that IRIS improves upon baselines, reaching a 44.57% average score with gains across iterations. In our setting, IRIS with only 26k annotated samples surpasses standard supervised fine-tuning trained on the full 200k dataset.
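The abstract's "adaptive order schedule" is not specified in this summary; a simple way to picture "sharper weighting early, smoother refinement near convergence" is a schedule that decays α over training. The function below is a hypothetical linear-decay stand-in (the endpoint values `alpha_hi` and `alpha_lo` are invented for illustration), not the schedule the paper actually uses, which the abstract says adapts to the distributional gap rather than to the step count.

```python
def alpha_schedule(step, total_steps, alpha_hi=2.0, alpha_lo=0.5):
    """Hypothetical linear decay of the Rényi order alpha.

    Starts at alpha_hi (sharper exponential weighting early in
    training) and ends at alpha_lo (smoother weighting near
    convergence). Illustrative only; IRIS adapts alpha to the
    model-target distributional gap, not to the step index.
    """
    t = min(max(step / max(total_steps, 1), 0.0), 1.0)  # clamp to [0, 1]
    return alpha_hi + t * (alpha_lo - alpha_hi)
```

A gap-driven variant would replace the step ratio `t` with a measured divergence estimate between the model and target distributions.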