PivotMerge: Bridging Heterogeneous Multimodal Pre-training via Post-Alignment Model Merging

arXiv cs.CV / 4/28/2026


Key Points

  • The paper proposes PivotMerge to bridge heterogeneous multimodal pre-training by merging MLLM components specifically at the “post-alignment” stage, rather than only after fine-tuning.
  • It frames multimodal pre-training as the problem of building cross-modal alignment between visual and textual representations, motivating the new post-alignment merging task.
  • PivotMerge addresses two main issues in merging heterogeneous models: cross-domain parameter interference and uneven alignment contributions across layers and projectors.
  • The method uses Shared-space Decomposition and Filtering to separate shared alignment from domain-specific differences and suppress conflicting update directions.
  • Experiments on CC12M-based post-alignment merging scenarios across multiple multimodal benchmarks show PivotMerge consistently outperforms prior baselines, indicating strong performance and generalization.
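The "Shared-space Decomposition and Filtering" idea in the key points can be sketched roughly as follows. The paper's exact formulation is not reproduced here; the SVD-based shared basis and the sign-agreement conflict filter below are illustrative assumptions, not the authors' actual algorithm.

```python
import numpy as np

def merge_with_shared_space(deltas, rank=2):
    """Toy sketch: decompose per-expert parameter updates into a shared
    low-rank subspace, suppress directions where the experts' updates
    conflict in sign, then average in the filtered subspace.

    deltas: list of same-shaped arrays (each expert's update vs. a common base).
    """
    # Stack flattened per-expert updates: (num_experts, num_params)
    D = np.stack([d.ravel() for d in deltas])
    # SVD yields a common basis; the top right-singular vectors approximate
    # the alignment pattern shared across experts.
    _, _, Vt = np.linalg.svd(D, full_matrices=False)
    shared = Vt[:rank]                      # (rank, num_params) shared basis
    coeffs = D @ shared.T                   # project each expert onto the basis
    # Conflict filter: zero out basis directions whose per-expert
    # coefficients disagree in sign (a crude interference proxy).
    agree = (np.sign(coeffs) == np.sign(coeffs[0])).all(axis=0)
    coeffs = coeffs * agree
    merged = coeffs.mean(axis=0) @ shared   # average in the filtered subspace
    return merged.reshape(deltas[0].shape)
```

With updates drawn from similar distributions, most top-rank directions agree in sign and survive the filter; strongly conflicting directions are dropped rather than averaged into a compromise.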

Abstract

Multimodal Large Language Models (MLLMs) rely on multimodal pre-training over diverse data sources, where different datasets often induce complementary cross-modal alignment capabilities. Model merging provides a cost-effective mechanism for integrating multiple expert MLLMs with complementary strengths into a unified model. However, existing model merging research mainly focuses on post-finetuning scenarios, leaving the pre-training stage largely unexplored. We argue that the core of MLLM pre-training lies in establishing effective cross-modal alignment, which bridges visual and textual representations into a unified semantic space. Motivated by this insight, we introduce the post-alignment merging task, which aims to integrate cross-modal alignment capabilities learned from heterogeneous multimodal pre-training. This setting introduces two key challenges: cross-domain parameter interference, where parameter updates learned from different data distributions conflict during merging, and layer-wise alignment contribution disparity, where different layers and projectors contribute unevenly to cross-modal alignment. To address them, we propose **PivotMerge**, a post-alignment merging framework for cross-modal projectors. PivotMerge incorporates two key components: Shared-space Decomposition and Filtering, which disentangles shared alignment patterns from domain-specific variations and suppresses conflicting directions, and Alignment-guided Layer-wise Merging, which assigns layer-specific merging weights based on differing alignment contributions. We construct systematic CC12M-based post-alignment merging scenarios for evaluation. Extensive experiments on multiple multimodal benchmarks show that PivotMerge consistently outperforms existing baselines, demonstrating its effectiveness and generalization ability.
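The second component, Alignment-guided Layer-wise Merging, assigns per-layer merging weights according to how much each layer contributes to cross-modal alignment. A minimal sketch of that weighting scheme follows; the per-layer alignment scores and the softmax normalization are assumptions for illustration (the paper's actual contribution measure is not specified here).

```python
import numpy as np

def alignment_guided_merge(expert_layers, alignment_scores):
    """Toy sketch: merge per-layer weights from several experts, giving
    each expert a layer-specific coefficient derived from a hypothetical
    alignment-contribution score (e.g. image-text similarity measured
    at that layer). Scores are normalized over experts per layer.

    expert_layers:    list of experts, each a list of per-layer arrays.
    alignment_scores: list of experts, each a list of per-layer floats.
    """
    num_experts = len(expert_layers)
    num_layers = len(expert_layers[0])
    merged = []
    for l in range(num_layers):
        scores = np.array([alignment_scores[e][l] for e in range(num_experts)])
        w = np.exp(scores) / np.exp(scores).sum()  # softmax over experts
        layer = sum(w[e] * expert_layers[e][l] for e in range(num_experts))
        merged.append(layer)
    return merged
```

With equal scores this reduces to uniform averaging; a layer where one expert aligns markedly better is pulled toward that expert's parameters instead of a flat mean.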