Learning to Credit the Right Steps: Objective-aware Process Optimization for Visual Generation

arXiv cs.CV / April 22, 2026


Key Points

  • The paper argues that current GRPO-style reinforcement learning for visual generative models suffers from coarse reward credit assignment, especially when multiple objectives (quality, motion consistency, text alignment) are involved.
  • Existing pipelines often merge multiple reward models into a single static scalar and propagate that signal uniformly across all diffusion timesteps, ignoring the stage-specific roles of different denoising steps.
  • It proposes Objective-aware Trajectory Credit Assignment (OTCA), which decomposes credit across denoising steps and adaptively allocates/combines multiple reward signals over the diffusion trajectory.
  • By modeling both temporal (timestep-level) and objective-level credit, OTCA turns coarse preference supervision into timestep-aware training signals aligned with the iterative diffusion process.
  • Experiments reported in the paper indicate OTCA improves both image and video generation quality across evaluation metrics.

Abstract

Reinforcement learning, particularly Group Relative Policy Optimization (GRPO), has emerged as an effective framework for post-training visual generative models with human preference signals. However, its effectiveness is fundamentally limited by coarse reward credit assignment. In modern visual generation, multiple reward models are often used to capture heterogeneous objectives, such as visual quality, motion consistency, and text alignment. Existing GRPO pipelines typically collapse these rewards into a single static scalar and propagate it uniformly across the entire diffusion trajectory. This design ignores the stage-specific roles of different denoising steps and produces mistimed or incompatible optimization signals. To address this issue, we propose Objective-aware Trajectory Credit Assignment (OTCA), a structured framework for fine-grained GRPO training. OTCA consists of two key components. Trajectory-Level Credit Decomposition estimates the relative importance of different denoising steps. Multi-Objective Credit Allocation adaptively weights and combines multiple reward signals throughout the denoising process. By jointly modeling temporal credit and objective-level credit, OTCA converts coarse reward supervision into a structured, timestep-aware training signal that better matches the iterative nature of diffusion-based generation. Extensive experiments show that OTCA consistently improves both image and video generation quality across evaluation metrics.
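The contrast at the heart of the paper can be sketched numerically. The snippet below is a minimal illustration, not the paper's implementation: the variable names, the linearly decaying timestep importance, and the Dirichlet-sampled per-step objective weights are all our own stand-ins for quantities that OTCA estimates or learns. It shows how a static scalar reward yields the same training signal at every denoising step, while a timestep- and objective-aware scheme produces credit that varies along the trajectory.

```python
import numpy as np

rng = np.random.default_rng(0)
T, K, G = 10, 3, 4  # denoising steps, reward objectives, GRPO group size

# Per-sample rewards from K reward models (e.g. quality, motion, alignment).
rewards = rng.uniform(0.0, 1.0, size=(G, K))

# --- Baseline GRPO: collapse objectives with fixed weights, broadcast over steps.
static_w = np.full(K, 1.0 / K)
scalar = rewards @ static_w                            # (G,) one scalar per sample
adv = (scalar - scalar.mean()) / (scalar.std() + 1e-8)  # group-relative advantage
baseline_signal = np.repeat(adv[:, None], T, axis=1)    # (G, T): identical at every t

# --- OTCA-style sketch: timestep importance c_t (Trajectory-Level Credit
# Decomposition) and per-step objective weights w_{t,k} (Multi-Objective
# Credit Allocation). Both are illustrative choices here, not the paper's.
c = np.linspace(2.0, 0.5, T)
c /= c.sum()                                            # relative step importance
w = rng.dirichlet(np.ones(K), size=T)                   # (T, K) objective mixture per step
per_step = rewards @ w.T                                # (G, T) blended rewards
per_step = (per_step - per_step.mean(0)) / (per_step.std(0) + 1e-8)
otca_signal = per_step * c[None, :]                     # timestep-aware credit

print(baseline_signal.std(axis=1))  # zeros: same signal at every step
print(otca_signal.std(axis=1))      # nonzero: credit differs across steps
```

The point of the sketch is the shape of the signal, not its values: the baseline's advantage is constant in `t`, whereas the OTCA-style signal modulates both which objective matters and how much credit a given denoising step receives.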