Turning the TIDE: Cross-Architecture Distillation for Diffusion Large Language Models

arXiv cs.CL / 4/30/2026

📰 News · Models & Research

Key Points

  • The paper introduces TIDE, a new framework for cross-architecture distillation of diffusion LLMs, addressing the gap left by prior methods that only distill within the same architecture.
  • TIDE uses three modular techniques: TIDAL, which adapts distillation strength to both training progress and diffusion timestep (a hedged sketch follows this list); CompDemo, which enriches the teacher's context to improve its predictions under heavy masking; and Reverse CALM, which handles cross-tokenizer learning with stable, bounded gradients.
  • Experiments distill large teachers (an 8B dense model and a 16B MoE model) into a 0.6B student through two heterogeneous pipelines, achieving an average improvement of 1.53 points over the baseline across eight benchmarks.
  • The strongest gains appear in code generation, where HumanEval reaches 48.78 versus 32.3 for the AR baseline.
  • The work suggests cross-architecture teacher-student transfer can retain high performance in diffusion LLMs while greatly reducing model size and inference cost.
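
For intuition, here is a minimal, hypothetical sketch of a TIDAL-style schedule. The post only states that distillation strength is modulated jointly by training progress and diffusion timestep; the concrete form below (a linear progress ramp multiplied by a timestep-dependent reliability factor) is an illustrative assumption, not the authors' formula, and the function and parameter names are invented for the example.

```python
# Hypothetical TIDAL-style weighting: the paper says distillation strength is
# modulated by training progress and diffusion timestep; the specific ramp and
# reliability factor below are assumptions made for illustration.

def tidal_weight(step: int, total_steps: int, timestep: float, max_weight: float = 1.0) -> float:
    """Return a distillation-loss weight in [0, max_weight].

    step / total_steps : training progress in [0, 1]
    timestep           : diffusion timestep, normalized so that 1.0 means fully masked
    """
    progress = min(step / max(total_steps, 1), 1.0)
    # Assumed ramp: lean on the teacher more early in training, less later.
    progress_factor = 1.0 - 0.5 * progress
    # Assumed reliability factor: the teacher is less trustworthy at heavily
    # masked (large) timesteps, so its signal is down-weighted there.
    timestep_factor = 1.0 - timestep ** 2
    return max_weight * progress_factor * max(timestep_factor, 0.0)


def total_loss(ce_loss: float, kd_loss: float, step: int, total_steps: int, timestep: float) -> float:
    """Blend the student's own cross-entropy loss with the teacher-matching loss."""
    w = tidal_weight(step, total_steps, timestep)
    return (1.0 - w) * ce_loss + w * kd_loss
```

In this reading, the student relies on the teacher most early in training and at lightly masked timesteps, where the teacher's noise-dependent reliability should be highest; the actual schedule in the paper may differ.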

Abstract

Diffusion large language models (dLLMs) offer parallel decoding and bidirectional context, but state-of-the-art dLLMs require billions of parameters for competitive performance. While existing distillation methods for dLLMs reduce inference steps within a single architecture, none address cross-architecture knowledge transfer, in which the teacher and student differ in architecture, attention mechanism, and tokenizer. We present TIDE, the first framework for cross-architecture dLLM distillation, comprising three modular components: (1) TIDAL, which jointly modulates distillation strength across training progress and diffusion timestep to account for the teacher's noise-dependent reliability; (2) CompDemo, which enriches the teacher's context via complementary mask splitting to improve predictions under heavy masking; and (3) Reverse CALM, a cross-tokenizer objective that inverts chunk-level likelihood matching, yielding bounded gradients and dual-end noise filtering. Distilling 8B dense and 16B MoE teachers into a 0.6B student via two heterogeneous pipelines yields a model that outperforms the baseline by an average of 1.53 points across eight benchmarks, with notable gains in code generation, where HumanEval scores reach 48.78 compared to 32.3 for the AR baseline.
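
As a second illustration, the sketch below shows one way complementary mask splitting could enrich the teacher's context: split the masked positions into two complementary halves, reveal one half from the reference sequence while the teacher predicts the other, then swap. This is only an assumed reading of CompDemo; the function names, the 50/50 split, and the use of ground-truth tokens to fill the revealed positions are not confirmed by the paper.

```python
# Assumed sketch of complementary mask splitting (CompDemo-like). Each of the two
# teacher views keeps half of the masked positions masked and reveals the other
# half, so the teacher predicts under a richer context than the fully masked input.
import random

MASK = "<mask>"

def complementary_splits(tokens, masked_positions, seed=0):
    """Yield two teacher inputs whose masked/revealed positions are complementary."""
    rng = random.Random(seed)
    positions = list(masked_positions)
    rng.shuffle(positions)
    half = set(positions[: len(positions) // 2])
    other = set(positions) - half

    for keep_masked in (half, other):
        view = list(tokens)
        for p in masked_positions:
            # Positions in `keep_masked` stay masked (the teacher predicts them);
            # the complementary positions are revealed from the reference tokens.
            view[p] = MASK if p in keep_masked else tokens[p]
        yield view, keep_masked
```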