Learning Permutation Distributions via Reflected Diffusion on Ranks
arXiv cs.LG · March 19, 2026
Key Points
- The paper proposes Soft-Rank Diffusion, a discrete diffusion framework that models permutations by lifting them to a continuous soft-rank representation, enabling smoother, more tractable trajectories.
- The forward process replaces shuffle-based corruption with structured soft-rank noise, sidestepping the factorial growth of the symmetric group S_n.
- The reverse process introduces contextualized generalized Plackett-Luce (cGPL) denoisers to better capture sequential decision structures in permutations.
- Experiments on sorting and combinatorial optimization show Soft-Rank Diffusion outperforming prior diffusion baselines, with strong gains in long-sequence settings.
- The approach promises improved permutation modeling for tasks like ranking and scheduling, where scalable, accurate distributions over permutations are valuable.
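The paper's exact soft-rank construction is not given in this summary; one common way to lift a discrete ranking into continuous space is to soften the rank operator with pairwise sigmoids, so that ranks vary smoothly with the underlying scores. A minimal sketch (the `tau` temperature and the sigmoid relaxation are illustrative assumptions, not the paper's definition):

```python
import numpy as np

def soft_rank(scores, tau=0.1):
    """Differentiable relaxation of the (ascending) rank vector.

    Each item's rank is approximated as 0.5 plus the sum of sigmoids of
    pairwise score differences; as tau -> 0 this recovers the hard ranks.
    Illustrative only -- not the paper's soft-rank construction.
    """
    s = np.asarray(scores, dtype=float)
    diff = (s[:, None] - s[None, :]) / tau      # diff[i, j] = (s_i - s_j) / tau
    sig = 0.5 * (1.0 + np.tanh(diff / 2.0))     # numerically stable sigmoid
    return sig.sum(axis=1) + 0.5                # soft ranks in [1, n]
```

With a small temperature the soft ranks round to the hard permutation, e.g. `soft_rank([3.0, 1.0, 2.0], tau=0.01)` is approximately `[3, 1, 2]`; larger `tau` smooths the trajectory between permutations, which is the point of the lifting.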
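The contextualized generalized Plackett-Luce (cGPL) denoiser itself is not specified in this summary, but the plain Plackett-Luce model it generalizes captures exactly the "sequential decision" view of a permutation: pick the first item among all candidates, then the second among the rest, and so on, each time with probability proportional to an item weight. A minimal sketch of sampling from a vanilla Plackett-Luce distribution (function name and signature are hypothetical):

```python
import numpy as np

def sample_plackett_luce(logits, rng):
    """Sample a permutation from a plain Plackett-Luce distribution.

    At each step, choose one of the remaining items with probability
    proportional to exp(logit), mirroring the sequential structure that
    the paper's cGPL denoisers build on. Illustrative only.
    """
    w = np.exp(np.asarray(logits, dtype=float))
    remaining = list(range(len(logits)))
    perm = []
    while remaining:
        p = w[remaining] / w[remaining].sum()   # renormalize over survivors
        choice = rng.choice(len(remaining), p=p)
        perm.append(remaining.pop(choice))
    return perm
```

A contextualized or generalized variant would condition the logits at each step on what has already been placed; the fixed-logit version above is only the base case.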