Manifold-Optimal Guidance: A Unified Riemannian Control View of Diffusion Guidance
arXiv cs.CV / 3/13/2026
Key Points
- MOG identifies a geometric mismatch in classifier-free guidance that can cause sampling trajectories to drift off the data manifold.
- It reframes diffusion guidance as a local optimal control problem and derives a closed-form, geometry-aware Riemannian update that corrects this off-manifold drift.
- Auto-MOG introduces a dynamic energy-balancing schedule that adaptively calibrates guidance strength and eliminates the need for manual hyperparameter tuning.
- The approach requires no retraining and imposes virtually no additional computational overhead.
- Extensive experiments show improved sample fidelity and prompt alignment over standard guidance baselines.
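To make the two mechanisms in the key points concrete, here is a minimal NumPy sketch: standard classifier-free guidance extrapolates along the conditional-minus-unconditional direction; a geometry-aware variant might project that correction onto a local tangent estimate of the data manifold so the update cannot push samples off it; and an energy-balancing rule might set the guidance weight adaptively by norm matching. All function names, the choice of orthogonal projection, and the norm-matching rule are illustrative assumptions, not the paper's actual formulas.

```python
import numpy as np

def cfg_direction(eps_uncond, eps_cond, w):
    """Standard classifier-free guidance: extrapolate along the
    conditional-minus-unconditional direction with weight w."""
    return eps_uncond + w * (eps_cond - eps_uncond)

def manifold_projected_guidance(eps_uncond, eps_cond, w, tangent_basis):
    """Hypothetical geometry-aware variant: project the guidance
    correction onto a local tangent space of the data manifold
    (columns of tangent_basis) before applying it, so the update has
    no component along off-manifold directions."""
    g = eps_cond - eps_uncond                       # raw guidance correction
    # Orthogonal projection of g onto span(tangent_basis) via least squares
    coeffs = np.linalg.lstsq(tangent_basis, g, rcond=None)[0]
    g_tangent = tangent_basis @ coeffs
    return eps_uncond + w * g_tangent

def energy_balanced_weight(eps_uncond, g, target_ratio=1.0):
    """Hypothetical adaptive schedule: scale the guidance term so its
    energy is a fixed fraction of the unconditional prediction's energy,
    removing the need to hand-tune a global guidance weight."""
    return target_ratio * np.linalg.norm(eps_uncond) / (np.linalg.norm(g) + 1e-8)
```

For example, with a tangent basis spanned by the first coordinate axis, a correction of `[1, 2, 3]` is projected to `[1, 0, 0]` before being scaled, whereas plain CFG would amplify all three components equally.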