AI Navigate

MM-Zero: Self-Evolving Multi-Model Vision Language Models From Zero Data

arXiv cs.CV / 3/11/2026


Key Points

  • MM-Zero is a novel self-evolving framework designed to enable Vision Language Models (VLMs) to improve from zero data without requiring initial visual inputs.
  • The framework introduces a multi-role training setup: a Proposer that generates visual concepts and questions, a Coder that converts these into executable code to render images, and a Solver that performs multimodal reasoning over the generated visuals (a minimal sketch of this loop follows the list).
  • All roles are based on a shared foundational model and trained using Group Relative Policy Optimization (GRPO) with reward mechanisms that integrate execution feedback and difficulty balancing.
  • Experiments demonstrate that MM-Zero significantly enhances VLM reasoning performance across diverse multimodal benchmarks, pushing the boundaries of self-evolving AI systems beyond previous dual-role models.
  • This approach paves the way for scalable, multi-model self-improvement systems that minimize human intervention in developing advanced multimodal AI models.
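The three roles can be pictured as a single propose-render-solve loop. The sketch below is only an illustration of that loop under stated assumptions, not the paper's implementation: the shared base model is stubbed out behind a placeholder generate() call, and the prompts, file names, and timeout are invented for the example.

import os
import subprocess
import tempfile

def generate(role_prompt: str, user_input: str, image: bytes | None = None) -> str:
    # Placeholder for the shared base VLM; all three roles would call the
    # same model, each conditioned on a different role prompt.
    raise NotImplementedError("plug in your VLM inference call here")

def propose() -> tuple[str, str]:
    # Proposer: invent an abstract visual concept and a question about it.
    out = generate("Propose a visual concept and a question about it.", "")
    concept, question = out.split("\n", 1)
    return concept, question

def render(concept: str) -> bytes | None:
    # Coder: translate the concept into executable Python that saves scene.png,
    # then run it; a failed run is the execution feedback used in the reward.
    code = generate("Write Python that draws the concept and saves scene.png.", concept)
    with tempfile.TemporaryDirectory() as tmp:
        script = os.path.join(tmp, "render.py")
        with open(script, "w") as f:
            f.write(code)
        proc = subprocess.run(["python", script], cwd=tmp,
                              capture_output=True, timeout=30)
        image_path = os.path.join(tmp, "scene.png")
        if proc.returncode == 0 and os.path.exists(image_path):
            with open(image_path, "rb") as img:
                return img.read()
    return None

def solve(image: bytes, question: str) -> str:
    # Solver: multimodal reasoning over the rendered image.
    return generate("Answer the question about the attached image.", question, image=image)

In actual training, a group of such rollouts per Proposer prompt would supply the samples that GRPO compares against one another when updating all three roles.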

Computer Science > Computer Vision and Pattern Recognition

arXiv:2603.09206 (cs)
[Submitted on 10 Mar 2026]

Title: MM-Zero: Self-Evolving Multi-Model Vision Language Models From Zero Data

Authors: Zongxia Li and 10 other authors
Abstract: Self-evolution has emerged as a key paradigm for improving foundational models such as Large Language Models (LLMs) and Vision Language Models (VLMs) with minimal human intervention. While recent approaches have demonstrated that LLM agents can self-evolve from scratch with little to no data, VLMs introduce an additional visual modality that typically requires at least some seed data, such as images, to bootstrap the self-evolution process. In this work, we present Multi-model Multimodal Zero (MM-Zero), the first RL-based framework to achieve zero-data self-evolution for VLM reasoning. Moving beyond prior dual-role (Proposer and Solver) setups, MM-Zero introduces a multi-role self-evolving training framework comprising three specialized roles: a Proposer that generates abstract visual concepts and formulates questions; a Coder that translates these concepts into executable code (e.g., Python, SVG) to render images; and a Solver that performs multimodal reasoning over the generated visual content. All three roles are initialized from the same base model and trained using Group Relative Policy Optimization (GRPO), with carefully designed reward mechanisms that integrate execution feedback, visual verification, and difficulty balancing. Our experiments show that MM-Zero improves VLM reasoning performance across a wide range of multimodal benchmarks. MM-Zero establishes a scalable path toward self-evolving multi-model systems for multimodal models, extending the frontier of self-improvement beyond the conventional two-model paradigm.
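The abstract does not spell out how execution feedback, visual verification, and difficulty balancing are weighted, so the following is only one plausible way to reduce them to a scalar reward for the Proposer/Coder side of the loop; the field names, equal weights, and 50% difficulty target are assumptions made for illustration, not details from the paper.

from dataclasses import dataclass

@dataclass
class Rollout:
    code_executed: bool      # did the Coder's program run and produce an image?
    image_verified: bool     # did a visual check match the image to the concept?
    solver_accuracy: float   # fraction of Solver samples answering correctly, in [0, 1]

def proposer_coder_reward(r: Rollout, target: float = 0.5) -> float:
    # Execution feedback gates everything: no rendered image, no reward.
    if not r.code_executed:
        return 0.0
    verify = 1.0 if r.image_verified else 0.0
    # Difficulty balancing: reward peaks when the Solver succeeds about half
    # the time, so generated tasks stay neither trivial nor unsolvable.
    difficulty = 1.0 - abs(r.solver_accuracy - target) / max(target, 1.0 - target)
    return (1.0 + verify + difficulty) / 3.0

def solver_reward(correct: bool) -> float:
    # The Solver's reward is plain answer correctness.
    return 1.0 if correct else 0.0

def grpo_advantages(rewards: list[float]) -> list[float]:
    # GRPO-style group-relative advantage: normalize each reward against the
    # mean and standard deviation of its sampling group.
    mean = sum(rewards) / len(rewards)
    std = (sum((x - mean) ** 2 for x in rewards) / len(rewards)) ** 0.5
    return [(x - mean) / (std + 1e-8) for x in rewards]

Whether MM-Zero combines these signals as a gated sum, a product, or something else is not stated in the abstract; the point of the sketch is only that all three can be collapsed into a scalar that GRPO's group-relative advantages can consume.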
Subjects: Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG)
Cite as: arXiv:2603.09206 [cs.CV]
  (or arXiv:2603.09206v1 [cs.CV] for this version)
  https://doi.org/10.48550/arXiv.2603.09206

Submission history

From: Xiyang Wu
[v1] Tue, 10 Mar 2026 05:23:26 UTC (438 KB)