Discrete Flow Matching Policy Optimization

arXiv cs.LG / 4/9/2026


Key Points

  • The paper introduces Discrete flow Matching policy Optimization (DoMinO), a unified framework for reinforcement learning fine-tuning of Discrete Flow Matching (DFM) models using policy gradient methods.
  • It reframes DFM sampling as a multi-step Markov Decision Process, recasting reward maximization during fine-tuning as a transparent and robust RL objective that avoids biased auxiliary estimators and likelihood surrogates.
  • To mitigate policy collapse during fine-tuning, DoMinO adds new total-variation regularizers that keep the fine-tuned distribution close to the pretrained one.
  • The authors provide theoretical error and regularizer bounds, including an upper bound on discretization error and tractable bounds for the regularization terms.
  • Experiments on regulatory DNA sequence design show improved predicted enhancer activity and better sequence naturalness versus prior reward-driven baselines, with regularization further improving alignment to natural sequence distributions.
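The MDP framing in the key points above can be illustrated with a toy sketch: treat each denoising step of a masked discrete sampler as one MDP transition, and apply a score-function (REINFORCE) policy gradient through the sampled trajectory. Everything here is a hypothetical stand-in, not the paper's actual sampler, model, or objective: the sizes, the fixed unmask-two-positions-per-step schedule, the position-wise logits, and the toy reward (fraction of a designated token, standing in for a predicted-activity model).

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, SEQ_LEN, STEPS = 4, 8, 4        # toy sizes (hypothetical)
PER_STEP = SEQ_LEN // STEPS            # positions unmasked per denoising step
MASK = -1

def softmax(x):
    e = np.exp(x - x.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

def reward(seq):
    # Stand-in for a sequence-level reward model
    # (e.g. a predictor of enhancer activity).
    return float((seq == 0).mean())

def reinforce_update(logits, lr=1.0, batch=64):
    """One policy-gradient step over full sampling trajectories.

    Each denoising step is one MDP transition that unmasks a block of
    positions; for a categorical action, grad log-prob = onehot(tok) - probs.
    """
    grad = np.zeros_like(logits)
    rewards = []
    for _ in range(batch):
        seq = np.full(SEQ_LEN, MASK)
        g = np.zeros_like(logits)
        for t in range(STEPS):                      # the multi-step MDP
            for i in range(t * PER_STEP, (t + 1) * PER_STEP):
                p = softmax(logits[i])
                tok = rng.choice(VOCAB, p=p)
                seq[i] = tok
                g[i] += np.eye(VOCAB)[tok] - p      # grad of log pi(tok)
        r = reward(seq)                             # terminal reward only
        rewards.append(r)
        grad += r * g                               # REINFORCE estimator
    logits += lr * grad / batch                     # ascend expected reward
    return float(np.mean(rewards))
```

Repeated calls to `reinforce_update` push the sampler toward high-reward sequences, which is the failure mode (reward over-optimization) that the paper's regularizers are designed to temper.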

Abstract

We introduce Discrete flow Matching policy Optimization (DoMinO), a unified framework for Reinforcement Learning (RL) fine-tuning of Discrete Flow Matching (DFM) models under a broad class of policy gradient methods. Our key idea is to view the DFM sampling procedure as a multi-step Markov Decision Process. This perspective provides a simple and transparent reformulation of fine-tuning reward maximization as a robust RL objective. Consequently, it not only preserves the original DFM samplers but also avoids biased auxiliary estimators and likelihood surrogates used by many prior RL fine-tuning methods. To prevent policy collapse, we also introduce new total-variation regularizers to keep the fine-tuned distribution close to the pretrained one. Theoretically, we establish an upper bound on the discretization error of DoMinO and tractable upper bounds for the regularizers. Experimentally, we evaluate DoMinO on regulatory DNA sequence design. DoMinO achieves stronger predicted enhancer activity and better sequence naturalness than the previous best reward-driven baselines. The regularization further improves alignment with the natural sequence distribution while preserving strong functional performance. These results establish DoMinO as a useful framework for controllable discrete sequence generation.
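The total-variation regularization described in the abstract can be sketched as follows: measure the TV distance between the pretrained and fine-tuned per-step categorical distributions and subtract a weighted penalty from the reward. The distributions, the weight `beta`, and the schematic combination below are illustrative assumptions, not the paper's tractable bounds or exact objective.

```python
import numpy as np

def tv_distance(p, q):
    # TV(p, q) = (1/2) * sum_k |p_k - q_k| for categorical distributions.
    return 0.5 * float(np.abs(np.asarray(p) - np.asarray(q)).sum())

# Hypothetical token distributions at one sampling step / position.
p_pretrained = [0.25, 0.25, 0.25, 0.25]
p_finetuned  = [0.70, 0.10, 0.10, 0.10]

beta = 0.1                                  # regularization weight (assumed)
penalty = beta * tv_distance(p_pretrained, p_finetuned)
# A regularized objective would subtract such per-step penalties from the
# reward, discouraging the fine-tuned sampler from drifting far from the
# pretrained distribution (the "policy collapse" guard described above).
```

TV distance is bounded in [0, 1] and is 0 only when the two distributions coincide, which makes it a natural anchor to the pretrained model.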