SiMPO: Measure Matching for Online Diffusion Reinforcement Learning

arXiv cs.LG / 3/12/2026

Key Points

  • SiMPO stands for Signed Measure Policy Optimization, a unified RL framework for diffusion policies that generalizes reweighting through arbitrary monotonic weighting functions.
  • The method uses a two-stage measure matching approach: first creating a virtual target policy via f-divergence regularized optimization that permits signed (potentially negative) target measures, then guiding diffusion or flow models with this signed measure through reweighted matching.
  • It relaxes the non-negativity constraint on the target measure, enabling negative reweighting and providing geometric intuition that negative weighting helps repel the policy from suboptimal actions.
  • Empirical results show SiMPO can outperform existing diffusion RL methods by flexibly choosing reweighting schemes tailored to the reward landscape, with practical guidelines for method selection.
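To make the contrast concrete, here is a toy sketch of the two weighting regimes described above: standard softmax reweighting, which is always non-negative, versus a signed monotonic weighting that assigns negative weight to below-average samples. The specific signed function (mean-centered rewards) is an illustrative assumption, not the paper's exact choice; SiMPO permits any monotonically increasing weighting.

```python
import numpy as np

def softmax_weights(rewards, beta=1.0):
    """Softmax reweighting over behavior samples: always non-negative,
    so low-reward samples are merely down-weighted, never repelled."""
    z = np.exp(beta * (rewards - rewards.max()))  # shift for stability
    return z / z.sum()

def signed_weights(rewards, beta=1.0):
    """Illustrative signed monotonic weighting (an assumption, not the
    paper's exact function): samples below the batch-mean reward get
    negative weight, actively pushing the policy away from them."""
    return beta * (rewards - rewards.mean())

rewards = np.array([1.0, 0.2, -0.5, 0.8])
print(softmax_weights(rewards))  # all entries >= 0, sum to 1
print(signed_weights(rewards))   # below-average samples get negative weight
```

Both functions are monotonically increasing in reward; the difference is only that the signed variant relaxes non-negativity, which is the constraint SiMPO removes.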

Abstract

A commonly used family of RL algorithms for diffusion policies conducts softmax reweighting over the behavior policy, which usually induces an over-greedy policy and fails to leverage feedback from negative samples. In this work, we introduce Signed Measure Policy Optimization (SiMPO), a simple and unified framework that generalizes the reweighting scheme in diffusion RL to general monotonic functions. SiMPO revisits diffusion RL through a two-stage measure matching lens. First, we construct a virtual target policy by f-divergence regularized policy optimization, where we can relax the non-negativity constraint to allow for a signed target measure. Second, we use this signed measure to guide diffusion or flow models through reweighted matching. This formulation offers two key advantages: a) it generalizes to arbitrary monotonically increasing weighting functions; and b) it provides a principled justification and practical guidance for negative reweighting. Furthermore, we provide geometric interpretations to illustrate how negative reweighting actively repels the policy from suboptimal actions. Extensive empirical evaluations demonstrate that SiMPO achieves superior performance by leveraging these flexible weighting schemes, and we provide practical guidelines for selecting reweighting methods tailored to the reward landscape.
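The second stage described in the abstract, reweighted matching, can be sketched as a weighted conditional flow-matching loss: each behavior sample's velocity-matching error is scaled by its (possibly negative) weight. Everything below is a minimal illustration under assumptions of my own (a linear velocity field, mean-centered signed weights, a hand-built reward), not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy behavior samples (2-D actions) with a hypothetical reward that
# prefers actions near the point (1, 0).
actions = rng.normal(size=(64, 2))
rewards = -np.linalg.norm(actions - np.array([1.0, 0.0]), axis=1)

def signed_weights(r, beta=2.0):
    # Illustrative signed monotonic weighting (an assumption):
    # below-average samples receive negative weight.
    return beta * (r - r.mean())

def reweighted_flow_matching_loss(theta, actions, w):
    """Weighted conditional flow matching: for the linear path
    x_t = (1 - t) x0 + t x1 with x0 ~ N(0, I), the velocity target
    is x1 - x0; each sample's squared error is scaled by its weight."""
    t = rng.uniform(size=(len(actions), 1))
    x0 = rng.normal(size=actions.shape)
    xt = (1 - t) * x0 + t * actions
    target = actions - x0
    pred = xt @ theta  # toy linear velocity field v_theta(x_t)
    per_sample = ((pred - target) ** 2).sum(axis=1)
    return (w * per_sample).mean()

theta = np.zeros((2, 2))
loss = reweighted_flow_matching_loss(theta, actions, signed_weights(rewards))
print(loss)
```

Note that with signed weights the objective can be driven negative on low-reward samples, which is exactly the repulsion effect the paper's geometric interpretation describes: minimizing the weighted loss increases the matching error on negatively weighted actions.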