CoVFT: Context-aware Visual Fine-tuning for Multimodal Large Language Models

arXiv cs.CV / 3/24/2026


Key Points

  • The paper investigates whether multimodal LLM vision encoders should be fine-tuned or kept frozen, noting that prior visual fine-tuning (VFT) approaches lack a unified and consistent conclusion across heterogeneous training setups.
  • Using a configuration-aligned benchmark, the authors show that existing VFT methods often do not reliably beat a frozen vision baseline across diverse multimodal tasks, attributing the instability to “visual preference conflicts” from context-agnostic vision encoders.
  • They introduce the Context-aware Visual Fine-tuning (CoVFT) framework, which conditions visual adaptation on multimodal context via a Context Vector Extraction (CVE) module and a Contextual Mixture-of-Experts (CoMoE) module.
  • Experiments across 12 multimodal benchmarks indicate CoVFT reaches state-of-the-art results while improving training stability compared with existing VFT methods.
  • A key finding is that fine-tuning a 7B MLLM with CoVFT can outperform the average performance of a 13B counterpart, suggesting substantial room for gains through better visual encoder optimization.
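To make the CVE/CoMoE idea concrete, here is a minimal toy sketch of a *context-gated* mixture-of-experts adapter: a pooled multimodal context vector (standing in for the CVE output) routes among several small visual-adapter experts, so different contexts drive different parameter paths instead of pulling one shared encoder in conflicting directions. This is an illustrative reconstruction under stated assumptions, not the paper's implementation; all shapes, the residual form, and the softmax router are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class ContextualMoE:
    """Toy contextual mixture-of-experts (hypothetical CoMoE analogue):
    a context vector gates several small visual-adapter experts, so the
    visual update is conditioned on the multimodal context rather than
    being context-agnostic."""

    def __init__(self, dim: int, n_experts: int):
        # Small random linear experts and a router; real systems would train these.
        self.experts = [rng.standard_normal((dim, dim)) * 0.02
                        for _ in range(n_experts)]
        self.router = rng.standard_normal((dim, n_experts)) * 0.02

    def __call__(self, visual_tokens: np.ndarray, context_vec: np.ndarray) -> np.ndarray:
        # Route on the context vector (e.g. a pooled instruction embedding),
        # not on the visual tokens themselves.
        gates = softmax(context_vec @ self.router)              # (E,)
        expert_out = np.stack([visual_tokens @ W                 # (E, T, D)
                               for W in self.experts])
        adapted = np.tensordot(gates, expert_out, axes=1)        # (T, D)
        return visual_tokens + adapted                           # residual update

moe = ContextualMoE(dim=16, n_experts=4)
tokens = rng.standard_normal((8, 16))   # 8 visual tokens of width 16
ctx = rng.standard_normal(16)           # pooled multimodal context vector
out = moe(tokens, ctx)
print(out.shape)                        # (8, 16)
```

The point of the sketch is the routing signal: because the gate depends only on the context vector, two tasks with conflicting visual preferences can select different expert mixtures, which is one plausible way to decompose the "divergent parameter updates" the paper describes.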

Abstract

Multimodal large language models (MLLMs) achieve remarkable progress in cross-modal perception and reasoning, yet a fundamental question remains unresolved: should the vision encoder be fine-tuned or frozen? Despite the success of models such as LLaVA and Qwen-VL, inconsistent design choices and heterogeneous training setups hinder a unified understanding of visual fine-tuning (VFT) in MLLMs. Through a configuration-aligned benchmark, we find that existing VFT methods fail to consistently outperform the frozen baseline across multimodal tasks. Our analysis suggests that this instability arises from visual preference conflicts, where the context-agnostic nature of vision encoders induces divergent parameter updates under diverse multimodal contexts. To address this issue, we propose the Context-aware Visual Fine-tuning (CoVFT) framework, which explicitly incorporates multimodal context into visual adaptation. By integrating a Context Vector Extraction (CVE) module and a Contextual Mixture-of-Experts (CoMoE) module, CoVFT decomposes conflicting optimization signals and enables stable, context-sensitive visual updates. Extensive experiments on 12 multimodal benchmarks demonstrate that CoVFT achieves state-of-the-art performance with superior stability. Notably, fine-tuning a 7B MLLM with CoVFT surpasses the average performance of its 13B counterpart, revealing substantial untapped potential in visual encoder optimization within MLLMs.