AI Navigate

Insight-V++: Towards Advanced Long-Chain Visual Reasoning with Multimodal Large Language Models

arXiv cs.CV / 3/20/2026


Key Points

  • Insight-V++ presents a unified multi-agent visual reasoning framework that evolves from Insight-V into a spatial-temporal architecture designed for long-horizon reasoning in multimodal LLMs.
  • The framework uses a dual-agent setup with a reasoning agent that constructs extensive analytical chains and a summary agent that critically evaluates and distills the final outcomes.
  • It introduces two new algorithms, ST-GRPO and J-GRPO, to enhance spatial-temporal reasoning and robustness, enabling a self-improving loop through reliable feedback from the summary agent.
  • A scalable data generation pipeline autonomously creates complex reasoning trajectories across image and video domains without human labeling.
  • Experiments on base models such as LLaVA-NeXT and Qwen2.5-VL show significant performance gains on reasoning benchmarks while preserving traditional perception capabilities.
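The dual-agent decomposition above can be sketched as a simple pipeline. This is a minimal illustration, not the paper's implementation: every name here (`reason`, `summarize`, `AgentOutput`) is hypothetical, and the two functions are stubs standing in for the reasoning agent's long analytical chain and the summary agent's evaluation and distillation step.

```python
from dataclasses import dataclass


@dataclass
class AgentOutput:
    chain: str   # long-form reasoning trace from the reasoning agent
    answer: str  # distilled final answer from the summary agent


def reason(question: str, visual_context: str) -> str:
    """Reasoning agent (stub): produce an extensive analytical chain."""
    return f"Step 1: inspect {visual_context}. Step 2: relate to '{question}'."


def summarize(question: str, chain: str) -> str:
    """Summary agent (stub): critically evaluate the chain and distill an answer.

    In the actual system this agent judges chain quality and extracts the
    final outcome; here we simply keep the last step as a placeholder.
    """
    return chain.split(". ")[-1]


def answer_query(question: str, visual_context: str) -> AgentOutput:
    chain = reason(question, visual_context)
    return AgentOutput(chain=chain, answer=summarize(question, chain))


out = answer_query("What changes between frames?", "video frames")
```

The point of the split is that the summary agent never has to generate the long chain itself, so its judgment of the chain can double as a reward signal for retraining the reasoning agent.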

Abstract

Large Language Models (LLMs) have achieved remarkable reliability and advanced capabilities through extended test-time reasoning. However, extending these capabilities to Multi-modal Large Language Models (MLLMs) remains a significant challenge due to a critical scarcity of high-quality, long-chain reasoning data and optimized training pipelines. To bridge this gap, we present a unified multi-agent visual reasoning framework that systematically evolves from our foundational image-centric model, Insight-V, into a generalized spatial-temporal architecture, Insight-V++. We first propose a scalable data generation pipeline equipped with multi-granularity assessment that autonomously synthesizes structured, complex reasoning trajectories across image and video domains without human intervention. Recognizing that directly supervising MLLMs with such intricate data yields sub-optimal results, we design a dual-agent architecture comprising a reasoning agent to execute extensive analytical chains, and a summary agent to critically evaluate and distill final outcomes. While our initial framework utilized Direct Preference Optimization (DPO), its off-policy nature fundamentally constrained reinforcement learning potential. To overcome these limitations, particularly for long-horizon video understanding, Insight-V++ introduces two novel algorithms, ST-GRPO and J-GRPO, which enhance spatial-temporal reasoning and improve evaluative robustness. Crucially, by leveraging reliable feedback from the summary agent, we guide an iterative reasoning path generation process, retraining the entire multi-agent system in a continuous, self-improving loop. Extensive experiments on base models like LLaVA-NeXT and Qwen2.5-VL demonstrate significant performance gains across challenging image and video reasoning benchmarks while preserving strong capabilities on traditional perception-focused tasks.
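ST-GRPO and J-GRPO are presented as GRPO-style reinforcement learning algorithms; their exact objectives are not detailed in this summary. As background, the following sketch shows only the shared GRPO core such variants presumably build on: sampling a group of rollouts per prompt and normalizing rewards within the group to obtain advantages, which removes the need for a learned value function. The function name and the example rewards are illustrative, not from the paper.

```python
import statistics


def group_relative_advantages(rewards: list[float]) -> list[float]:
    """Normalize a group of rollout rewards to zero mean and unit std (GRPO-style)."""
    mean = statistics.fmean(rewards)
    std = statistics.pstdev(rewards)
    if std == 0.0:
        # Identical rewards carry no learning signal for this group.
        return [0.0] * len(rewards)
    return [(r - mean) / std for r in rewards]


# e.g. binary rewards from the summary agent judging 4 sampled reasoning chains
adv = group_relative_advantages([1.0, 0.0, 1.0, 0.0])
# → [1.0, -1.0, 1.0, -1.0]
```

In the self-improving loop the abstract describes, the summary agent's evaluations would supply these per-rollout rewards, so correct chains are pushed up and flawed ones down relative to their own group.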