TRIMS: Trajectory-Ranked Instruction Masked Supervision for Diffusion Language Models

arXiv cs.CL / 4/3/2026


Key Points

  • Diffusion language models (DLMs) promise low-latency generation through parallel decoding, but standard training provides no explicit supervision over the order in which tokens are revealed (the decoding trajectory); the resulting train-inference mismatch degrades decoding efficiency.
  • The proposed method, TRIMS, is a lightweight trajectory-guided supervised fine-tuning framework that injects decoding-trajectory supervision, derived from an inexpensive teacher signal, into existing Masked Diffusion Language Model (MDLM) training with minimal overhead.

Abstract

Diffusion language models (DLMs) offer a promising path toward low-latency generation through parallel decoding, but their practical efficiency depends heavily on the decoding trajectory. In practice, this advantage often fails to fully materialize because standard training does not provide explicit supervision over token reveal order, creating a train-inference mismatch that leads to suboptimal decoding behavior. We propose Trajectory-Ranked Instruction Masked Supervision (TRIMS), a simple trajectory-guided supervised fine-tuning framework that injects trajectory supervision into standard Masked Diffusion Language Model (MDLM) training with minimal overhead. Instead of relying on costly DLM-based distillation, TRIMS uses lightweight signals from an autoregressive teacher to guide a trajectory-aware masking strategy, encouraging the model to learn more effective decoding orders. Experiments on LLaDA and Dream across math and coding benchmarks show that TRIMS significantly improves the accuracy-parallelism trade-off over both standard MDLM training and train-free acceleration baselines, while achieving competitive performance with prior distillation-based approaches at substantially lower training cost. Further analysis shows that TRIMS leads to better decoding trajectories, validating the effectiveness of trajectory-guided supervision for DLMs.
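The core idea in the abstract — using lightweight per-token signals from an autoregressive teacher to decide which tokens to mask, so that training examples mimic intermediate states of a good decoding trajectory — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the use of teacher confidence as the ranking signal, and the `MASK_ID` placeholder are all assumptions for exposition.

```python
MASK_ID = -1  # placeholder mask token id (assumption; real MDLMs use a dedicated vocab entry)

def trajectory_ranked_mask(tokens, teacher_scores, reveal_ratio):
    """Build one training example that mimics an intermediate decoding state.

    Tokens the autoregressive teacher scores as high-confidence are treated as
    'revealed early' on the decoding trajectory and kept visible; the remaining
    tokens are masked, so the model is supervised to predict the harder tokens
    given the easy ones as context.
    """
    assert len(tokens) == len(teacher_scores)
    n = len(tokens)
    k = max(1, int(round(reveal_ratio * n)))  # how many tokens to leave revealed
    # Rank positions by teacher confidence, highest first.
    order = sorted(range(n), key=lambda i: teacher_scores[i], reverse=True)
    revealed = set(order[:k])
    masked_input = [t if i in revealed else MASK_ID for i, t in enumerate(tokens)]
    # Loss targets: only the masked positions contribute (None = no loss).
    targets = [None if i in revealed else t for i, t in enumerate(tokens)]
    return masked_input, targets

# Toy example: five tokens with hypothetical teacher confidences.
tokens = [11, 12, 13, 14, 15]
scores = [0.9, 0.2, 0.7, 0.1, 0.5]
masked_input, targets = trajectory_ranked_mask(tokens, scores, reveal_ratio=0.4)
print(masked_input)  # [11, -1, 13, -1, -1]
print(targets)       # [None, 12, None, 14, 15]
```

Contrast this with standard MDLM training, which masks positions uniformly at random: here the masking pattern itself encodes a preferred reveal order, which is what the abstract means by injecting trajectory supervision at minimal cost.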