AI Navigate

MM-Zero: Self-Evolving Multi-Model Vision Language Models From Zero Data

arXiv cs.CV / 2026/3/11


Key Points

  • MM-Zero is a new self-evolution framework designed to let vision language models (VLMs) self-improve from zero data, without requiring any initial visual data.
  • The framework introduces a multi-role training setup with three roles: a Proposer that generates visual concepts and questions, a Coder that translates these into executable code to render images, and a Solver that performs multimodal reasoning over the generated visual content.
  • All roles are initialized from a shared base model and trained with Group Relative Policy Optimization (GRPO), using a reward mechanism that incorporates execution feedback and difficulty balancing.
  • Experiments show that MM-Zero substantially improves VLM reasoning performance across diverse multimodal benchmarks, pushing the frontier of self-evolving AI systems beyond conventional dual-role models.
  • This approach opens a path toward scalable multi-model self-improvement systems that can develop advanced multimodal AI models with minimal human intervention.
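The Proposer → Coder → Solver loop described above can be sketched roughly as follows. All three role implementations here are placeholder stubs invented for illustration; the paper's actual prompts, models, and reward details are not reproduced in this summary.

```python
# Hypothetical sketch of MM-Zero's three-role loop (Proposer -> Coder -> Solver).
# Each role is a stub standing in for a model policy, not the paper's code.

def proposer():
    """Propose an abstract visual concept and a question about it."""
    return {"concept": "two overlapping circles",
            "question": "How many enclosed regions are formed?"}

def coder(concept):
    """Translate the concept into executable rendering code (here, SVG)."""
    return ('<svg xmlns="http://www.w3.org/2000/svg" width="100" height="100">'
            '<circle cx="40" cy="50" r="30" fill="none" stroke="black"/>'
            '<circle cx="60" cy="50" r="30" fill="none" stroke="black"/></svg>')

def solver(image, question):
    """Reason over the rendered image to answer the question (stubbed)."""
    return "3"

def self_evolve_step():
    """One iteration: propose a task, render it, solve it, collect feedback."""
    task = proposer()
    image = coder(task["concept"])
    answer = solver(image, task["question"])
    # Execution feedback: did the Coder produce renderable output?
    render_ok = image.startswith("<svg")
    return answer, render_ok

answer, render_ok = self_evolve_step()
```

In the actual framework, the execution feedback and the Solver's success rate would feed into the GRPO rewards for the Coder and Proposer, closing the self-evolution loop.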

Computer Science > Computer Vision and Pattern Recognition

arXiv:2603.09206 (cs)
[Submitted on 10 Mar 2026]

Title:MM-Zero: Self-Evolving Multi-Model Vision Language Models From Zero Data

Authors: Zongxia Li and 10 other authors
Abstract: Self-evolution has emerged as a key paradigm for improving foundational models such as Large Language Models (LLMs) and Vision Language Models (VLMs) with minimal human intervention. While recent approaches have demonstrated that LLM agents can self-evolve from scratch with little to no data, VLMs introduce an additional visual modality that typically requires at least some seed data, such as images, to bootstrap the self-evolution process. In this work, we present Multi-model Multimodal Zero (MM-Zero), the first RL-based framework to achieve zero-data self-evolution for VLM reasoning. Moving beyond prior dual-role (Proposer and Solver) setups, MM-Zero introduces a multi-role self-evolving training framework comprising three specialized roles: a Proposer that generates abstract visual concepts and formulates questions; a Coder that translates these concepts into executable code (e.g., Python, SVG) to render visual images; and a Solver that performs multimodal reasoning over the generated visual content. All three roles are initialized from the same base model and trained using Group Relative Policy Optimization (GRPO), with carefully designed reward mechanisms that integrate execution feedback, visual verification, and difficulty balancing. Our experiments show that MM-Zero improves VLM reasoning performance across a wide range of multimodal benchmarks. MM-Zero establishes a scalable path toward self-evolving multi-model systems for multimodal models, extending the frontier of self-improvement beyond the conventional two-model paradigm.
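GRPO, named in the abstract as the training algorithm for all three roles, scores each sampled rollout against the statistics of its own sampling group rather than a learned value function. A minimal version of that group-relative advantage computation is shown below; the exact normalization and reward shaping used in MM-Zero may differ.

```python
from statistics import mean, stdev

def grpo_advantages(rewards, eps=1e-8):
    """Group-relative advantages: normalize each rollout's reward against
    the mean and standard deviation of its sampling group (no critic needed)."""
    mu = mean(rewards)
    sigma = stdev(rewards) if len(rewards) > 1 else 0.0
    return [(r - mu) / (sigma + eps) for r in rewards]

# Four rollouts of the same prompt: two succeed (reward 1.0), two fail (0.0).
adv = grpo_advantages([1.0, 0.0, 1.0, 0.0])
```

Successful rollouts receive positive advantages and failed ones negative, so the policy gradient pushes each role toward above-group-average behavior without training a separate value model.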
Subjects: Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG)
Cite as: arXiv:2603.09206 [cs.CV]
  (or arXiv:2603.09206v1 [cs.CV] for this version)
  https://doi.org/10.48550/arXiv.2603.09206

Submission history

From: Xiyang Wu [view email]
[v1] Tue, 10 Mar 2026 05:23:26 UTC (438 KB)