EasyVideoR1: Easier RL for Video Understanding

arXiv cs.CV / 4/21/2026


Key Points

  • The paper introduces EasyVideoR1, a reinforcement-learning framework tailored to train large vision-language models for video understanding using RLVR-style verifiable rewards.
  • EasyVideoR1 includes an efficient end-to-end video RL training pipeline that uses offline preprocessing and tensor caching to avoid repeated video decoding, improving throughput by about 1.47×.
  • It proposes a unified, task-aware reward system spanning 11 distinct video/image problem types, along with a mixed offline-online training strategy that combines curated trajectories with on-policy exploration.
  • The framework supports joint image-video training with independently configurable pixel budgets, enabling mutual reinforcement between the two modalities.
  • An asynchronous evaluation setup runs across 22 mainstream video understanding benchmarks and reports results that closely match official accuracy scores, addressing reproducibility challenges.

Abstract

Reinforcement learning from verifiable rewards (RLVR) has demonstrated remarkable effectiveness in improving the reasoning capabilities of large language models. As models evolve into natively multimodal architectures, extending RLVR to video understanding becomes increasingly important yet remains largely unexplored, due to the diversity of video task types, the computational overhead of repeatedly decoding and preprocessing high-dimensional visual inputs, and the difficulty of reproducible evaluation across numerous sensitive hyperparameters. Existing open-source RL training frameworks provide solid infrastructure for text and image scenarios but lack systematic optimizations tailored for video modality. In this work, we present **EasyVideoR1**, a complete and efficient reinforcement learning framework specifically designed for training large vision-language models on video understanding tasks. EasyVideoR1 makes the following contributions: (1) a full video RL training pipeline with offline preprocessing and tensor caching that eliminates redundant video decoding and yields a 1.47× throughput improvement; (2) a comprehensive, task-aware reward system covering 11 distinct video and image problem types with unified routing and modular extension; (3) a mixed offline-online data training paradigm that combines curated high-quality trajectories with on-policy exploration, benefiting the learning of more challenging tasks; (4) joint image-video training with independently configurable pixel budgets, allowing the two modalities to mutually reinforce each other; and (5) an asynchronous multi-benchmark evaluation framework covering 22 mainstream video understanding benchmarks, with reproduced accuracy closely aligned with officially reported scores.
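The "unified routing and modular extension" of the reward system plausibly amounts to a registry that maps each task type to its own verifiable reward function, with one dispatch entry point. The sketch below illustrates that pattern under assumptions: the task names and the two reward functions are invented examples, not the actual 11 task types from the paper.

```python
# Hedged sketch of a task-aware reward router: each task type registers
# its own verifier, and a single entry point dispatches by task tag.
# Task names and scoring rules here are illustrative, not from the paper.
from typing import Callable, Dict

REWARD_REGISTRY: Dict[str, Callable[[str, str], float]] = {}

def register_reward(task_type: str):
    """Decorator that plugs a task-specific reward into the shared registry."""
    def wrap(fn: Callable[[str, str], float]):
        REWARD_REGISTRY[task_type] = fn
        return fn
    return wrap

@register_reward("multiple_choice")
def mc_reward(prediction: str, answer: str) -> float:
    # Verifiable reward: exact match on the chosen option letter.
    return 1.0 if prediction.strip().upper() == answer.strip().upper() else 0.0

@register_reward("temporal_grounding")
def iou_reward(prediction: str, answer: str) -> float:
    # Verifiable reward: temporal IoU between "start,end" spans in seconds.
    (ps, pe), (gs, ge) = (map(float, s.split(",")) for s in (prediction, answer))
    inter = max(0.0, min(pe, ge) - max(ps, gs))
    union = max(pe, ge) - min(ps, gs)
    return inter / union if union > 0 else 0.0

def compute_reward(task_type: str, prediction: str, answer: str) -> float:
    """Unified routing: look up the task's verifier and score the rollout."""
    return REWARD_REGISTRY[task_type](prediction, answer)
```

New task types extend the system by registering another function, without touching the dispatch path, which matches the "modular extension" claim.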
