TinyR1-32B-Preview: Boosting Accuracy with Branch-Merge Distillation

arXiv cs.CL / 4/30/2026

Key Points

  • The paper introduces Branch-Merge distillation to compress large LLMs without sacrificing accuracy. It proceeds in two stages: a “Branch” phase that selectively distills the teacher into domain-specific students via supervised fine-tuning (SFT), and a “Merge” phase that combines those students for cross-domain knowledge transfer (a minimal sketch of the merge step follows this list).
  • The method addresses limitations of prior compression approaches like standard distillation and transfer learning, which often struggle to maintain high performance at smaller sizes.
  • Experiments use DeepSeek-R1 as the teacher and DeepSeek-R1-Distill-Qwen-32B as the student, producing TinyR1-32B-Preview as the merged model.
  • TinyR1-32B-Preview improves over the DeepSeek-R1-Distill-Qwen-32B baseline on Mathematics (+5.5 points), Coding (+4.4 points), and Science (+2.9 points), and it remains close to DeepSeek-R1 on AIME 2024.
  • The authors argue the approach is scalable and reduces computation and time needed to build smaller, high-performing LLMs.
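The Merge phase combines the domain-specific students into a single model. The summary above does not specify the exact merging rule, so the sketch below assumes the simplest baseline, linear parameter averaging of the student checkpoints; the function and file names are hypothetical, not artifacts from the paper.

```python
# Hedged sketch of a "Merge"-style step as uniform/weighted parameter averaging.
# Assumption: linear averaging is a common merging baseline, not necessarily the
# exact procedure used to build TinyR1-32B-Preview.
import torch


def merge_state_dicts(state_dicts, weights=None):
    """Merge several student checkpoints by parameter-wise weighted averaging."""
    if weights is None:
        weights = [1.0 / len(state_dicts)] * len(state_dicts)
    merged = {}
    for key in state_dicts[0]:
        # Accumulate the weighted sum of this parameter across all students.
        merged[key] = sum(w * sd[key].float() for w, sd in zip(weights, state_dicts))
    return merged


# Hypothetical usage: merge math, coding, and science students into one model.
# paths = ["student_math.pt", "student_code.pt", "student_science.pt"]
# students = [torch.load(p, map_location="cpu") for p in paths]
# merged = merge_state_dicts(students)
```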

Abstract

The challenge of reducing the size of Large Language Models (LLMs) while maintaining their performance has gained significant attention. However, existing methods, such as model distillation and transfer learning, often fail to achieve high accuracy. To address this limitation, we introduce the Branch-Merge distillation approach, which enhances model compression through two phases: (1) the Branch Phase, where knowledge from a large teacher model is selectively distilled into specialized student models via domain-specific supervised fine-tuning (SFT); and (2) the Merge Phase, where these student models are merged to enable cross-domain knowledge transfer and improve generalization. We validate our distillation approach using DeepSeek-R1 as the teacher and DeepSeek-R1-Distill-Qwen-32B as the student. The resulting merged model, TinyR1-32B-Preview, outperforms its counterpart DeepSeek-R1-Distill-Qwen-32B across multiple benchmarks, including Mathematics (+5.5 points), Coding (+4.4 points), and Science (+2.9 points), while achieving near-equal performance to DeepSeek-R1 on AIME 2024. The Branch-Merge distillation approach provides a scalable solution for creating smaller, high-performing LLMs with reduced computational cost and time.
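
For the Branch phase, each student starts from the same base checkpoint and is fine-tuned on one domain's data (e.g., math, coding, or science) distilled from the teacher. A minimal sketch is below, assuming a Hugging Face-style causal language model whose forward pass returns a loss when labels are supplied; the function name and hyperparameters are illustrative, not the paper's settings.

```python
# Hedged sketch of the "Branch" phase: per-domain supervised fine-tuning (SFT)
# of one student copy on teacher-generated traces for a single domain.
import torch
from torch.utils.data import DataLoader


def branch_sft(student, domain_loader: DataLoader, epochs: int = 1, lr: float = 1e-5):
    """Fine-tune one student on a single domain's distilled data."""
    optimizer = torch.optim.AdamW(student.parameters(), lr=lr)
    student.train()
    for _ in range(epochs):
        for batch in domain_loader:
            # batch holds input_ids, attention_mask, and labels for causal-LM SFT;
            # the model is assumed to return cross-entropy loss given labels.
            loss = student(**batch).loss
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()
    return student
```

Running this loop once per domain yields the specialized students that the Merge phase then combines.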