SVSR: A Self-Verification and Self-Rectification Paradigm for Multimodal Reasoning

arXiv cs.AI / 4/14/2026


Key Points

  • The paper introduces SVSR, a framework that explicitly adds self-verification and self-rectification steps into multimodal models’ reasoning pipelines to reduce errors from shallow or inconsistent reasoning.
  • SVSR uses a three-stage training approach: building a high-quality preference dataset from refined reasoning traces (including forward/backward reasoning signals), cold-start supervised fine-tuning for structured multi-step reasoning, and Semi-online DPO that periodically augments training data with teacher-filtered model-generated traces.
  • Experiments across multiple multimodal and visual reasoning benchmarks reportedly show improved accuracy, robustness, and generalization to unseen tasks and question types.
  • The authors also claim that models trained with explicit self-reflective reasoning develop stronger implicit reasoning capabilities, improving performance even when explicit reasoning traces are not provided.
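The "Semi-online DPO" stage described above periodically grows the training corpus with model-generated traces that pass a teacher filter. A minimal sketch of one such augmentation round is below; the function names (`model_generate`, `teacher_score`) and the threshold-based filter are illustrative assumptions, not the paper's actual API.

```python
def semi_online_augment(model_generate, teacher_score, prompts,
                        dataset, threshold=0.8):
    """One semi-online augmentation round (illustrative sketch):
    sample a reasoning trace from the current model for each prompt,
    keep only traces the teacher VLM scores above a quality threshold,
    and fold the survivors back into the preference corpus."""
    for prompt in prompts:
        trace = model_generate(prompt)                 # model-generated reasoning trace
        if teacher_score(prompt, trace) >= threshold:  # teacher-VLM quality filter
            dataset.append((prompt, trace))
    return dataset
```

In the paper's setup the filtered traces would feed preference pairs for DPO; here the filter is reduced to a single scalar score for clarity.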

Abstract

Current multimodal models often suffer from shallow reasoning, leading to errors caused by incomplete or inconsistent thought processes. To address this limitation, we propose Self-Verification and Self-Rectification (SVSR), a unified framework that explicitly integrates self-verification and self-rectification into the model's reasoning pipeline, substantially improving robustness and reliability in complex visual understanding and multimodal reasoning tasks. SVSR is built on a novel three-stage training paradigm. First, we construct a high-quality unified preference dataset by refining reasoning traces from pre-trained vision-language models, incorporating both forward and backward reasoning to embed self-reflective signals. Second, we perform cold-start supervised fine-tuning on this dataset to learn structured, multi-step reasoning behaviors. Third, we apply a Semi-online Direct Preference Optimization (Semi-online DPO) process, continuously augmenting the training corpus with high-quality, model-generated reasoning traces filtered by a powerful teacher VLM. This pipeline enables the model to learn, elicit, and refine its ability to self-verify and self-rectify. Extensive experiments across diverse benchmarks demonstrate that SVSR improves reasoning accuracy and enables stronger generalization to unseen tasks and question types. Notably, once trained with explicit self-reflective reasoning, the model also exhibits improved implicit reasoning ability, outperforming strong baselines even when no explicit reasoning traces are provided. These results highlight the potential of SVSR for building more dependable, introspective, and cognitively aligned multimodal systems.
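The third stage applies Direct Preference Optimization to the chosen/rejected reasoning traces. As a reference point, the standard per-pair DPO objective (Rafailov et al., 2023) can be written in a few lines; this is the generic loss, not SVSR's specific training code, and the log-probabilities are assumed to be summed over response tokens.

```python
import math

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Per-pair DPO loss:
    -log sigmoid(beta * [(log pi_w - log ref_w) - (log pi_l - log ref_l)]),
    where w is the preferred (chosen) trace and l the rejected one."""
    margin = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))  # -log sigmoid(margin)
```

When the policy matches the reference the margin is zero and the loss is log 2; as the policy shifts probability mass toward the chosen trace relative to the reference, the loss decreases.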