Alternating Diffusion for Proximal Sampling with Zeroth Order Queries
arXiv cs.LG / 3/23/2026
📰 News · Models & Research
Key Points
- The paper introduces a new approximate proximal sampler that operates using only zeroth-order queries (function evaluations) of the potential function.
- The method treats the intermediate particle distribution as a Gaussian mixture, yielding a Monte Carlo score estimator from directly samplable distributions without learned models or auxiliary samplers.
- It avoids rejection sampling, supports flexible step sizes, and runs with a deterministic runtime budget.
- Theoretical results indicate exponential convergence of proximal sampling under isoperimetric conditions when score estimation error is well-controlled, with practical gains from multi-particle interactions and parallel computation.
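The score-estimation idea in the key points can be illustrated with a small numerical sketch. This is an assumption-laden illustration, not the paper's exact algorithm: the function name `zeroth_order_score` and all parameters are hypothetical. The sketch estimates the score of a Gaussian-smoothed target, treating it as a mixture of directly samplable Gaussians and using only evaluations of the potential `f` via self-normalized importance weighting.

```python
import numpy as np

def zeroth_order_score(y, f, t, n_samples=256, rng=None):
    """Monte Carlo estimate of the score of the smoothed density
    mu_t proportional to exp(-f) convolved with N(0, t*I), at point y.

    Only zeroth-order information is used: draw x_i ~ N(y, t*I),
    weight each draw by exp(-f(x_i)), and average (x_i - y) / t.
    (Illustrative sketch; not the paper's exact estimator.)
    """
    rng = np.random.default_rng(rng)
    d = y.shape[0]
    # Proposal draws from the Gaussian mixture component centered at y.
    xs = y + np.sqrt(t) * rng.standard_normal((n_samples, d))
    # Self-normalized importance weights from potential evaluations only.
    log_w = -np.array([f(x) for x in xs])
    log_w -= log_w.max()              # stabilize before exponentiating
    w = np.exp(log_w)
    w /= w.sum()
    # Posterior-mean form of the smoothed score: E[(x - y) / t].
    return (w[:, None] * (xs - y) / t).sum(axis=0)
```

Plugging such an estimate into an unadjusted Langevin or diffusion step gives a rejection-free inner sampler whose per-step cost is a fixed budget of `n_samples` potential evaluations, consistent with the deterministic-runtime and parallel-computation claims above: the weighted draws are independent and trivially parallelizable.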