A Step Toward Federated Pretraining of Multimodal Large Language Models

arXiv cs.LG / 3/31/2026

Key Points

  • The paper argues that multimodal LLM pre-training is limited by saturated public data and proposes using federated learning to leverage privacy-preserving multimodal data silos.
  • It introduces the Federated MLLM Alignment (Fed-MA) task: the vision encoder and LLM are frozen, and only the cross-modal projector is trained collaboratively during a lightweight pre-training stage.
  • The authors identify two key issues for federated pre-training—parameter interference when aggregating local projectors and gradient oscillations under one-pass collaborative SGD.
  • To address these, they propose Fed-CMP, using Canonical Reliability-Aware Aggregation to fuse decomposed client projectors via a shared alignment basis with reliability weighting, and Orthogonality-Preserved Momentum to stabilize optimization while preserving geometric structure.
  • Experiments across four federated pre-training scenarios using public datasets show Fed-CMP significantly outperforms existing federated pre-training baselines.
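To make the Fed-MA setup concrete, here is a minimal NumPy sketch of one federated round in which only the projector is trained and aggregated. All dimensions, the least-squares alignment proxy, and plain FedAvg aggregation are illustrative stand-ins, not details from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical; a real projector maps e.g. ViT features
# into the LLM embedding space).
D_VIS, D_LLM, N_CLIENTS = 8, 16, 4

# Frozen components are shared by all clients and never updated
# or communicated.
vision_encoder = rng.normal(size=(D_VIS, D_VIS))  # stand-in, frozen

# The only trainable (and communicated) parameters: the projector.
global_projector = rng.normal(size=(D_VIS, D_LLM)) * 0.01

def local_step(projector, X, Y, lr=0.1):
    """One SGD step on a least-squares alignment proxy:
    minimize ||X @ projector - Y||^2 over the projector only."""
    grad = 2 * X.T @ (X @ projector - Y) / len(X)
    return projector - lr * grad

client_projectors = []
for _ in range(N_CLIENTS):
    X = rng.normal(size=(32, D_VIS)) @ vision_encoder  # frozen features
    Y = rng.normal(size=(32, D_LLM))                   # toy targets
    client_projectors.append(local_step(global_projector, X, Y))

# FedAvg-style aggregation: only the small projector crosses the network,
# which is what makes the pre-training stage "lightweight".
global_projector = np.mean(client_projectors, axis=0)
print(global_projector.shape)  # (8, 16)
```

Naive averaging like this is exactly where the parameter-interference problem the authors identify would arise, motivating the aggregation scheme below.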

Abstract

The rapid evolution of Multimodal Large Language Models (MLLMs) is bottlenecked by the saturation of high-quality public data, while vast amounts of diverse multimodal data remain inaccessible in privacy-sensitive silos. Federated Learning (FL) offers a promising solution to unlock these distributed resources, but existing research focuses predominantly on fine-tuning, leaving the foundational pre-training phase largely unexplored. In this paper, we formally introduce the Federated MLLM Alignment (Fed-MA) task, a lightweight pre-training paradigm that freezes the vision encoder and LLM while collaboratively training the cross-modal projector. We identify two critical challenges in this setting: (i) parameter interference in aggregating local projectors; and (ii) gradient oscillations in one-pass collaborative SGD. To address these challenges, we propose Fed-CMP, a pioneering framework for federated MLLM pre-training. Fed-CMP employs Canonical Reliability-Aware Aggregation, which constructs a canonical space to decompose client projectors into a shared alignment basis and client-specific coefficients, then performs reliability-weighted fusion to suppress parameter interference. Furthermore, Fed-CMP introduces Orthogonality-Preserved Momentum, which applies momentum to the shared alignment basis via orthogonal projection, accumulating historical optimization directions while preserving geometric structure. We construct four federated pre-training scenarios based on public datasets, and extensive experiments validate that Fed-CMP significantly outperforms existing baselines.
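The two Fed-CMP components can be illustrated with a simplified NumPy sketch. The shared-basis decomposition here uses a plain SVD, the reliability weights use inverse residual energy, and the momentum step uses a standard Stiefel-manifold tangent projection with QR retraction; all of these are plausible stand-ins for whatever the paper actually defines, not its algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)
D_VIS, D_LLM, K, N_CLIENTS = 8, 16, 4, 3

# Hypothetical client projectors after local training.
client_projectors = [rng.normal(size=(D_VIS, D_LLM)) for _ in range(N_CLIENTS)]

# --- Reliability-aware fusion in a shared basis (simplified) ---
# Stack client projectors and extract a shared rank-K alignment basis.
stacked = np.concatenate(client_projectors, axis=1)   # (D_VIS, N*D_LLM)
U, _, _ = np.linalg.svd(stacked, full_matrices=False)
basis = U[:, :K]                                      # shared basis, (D_VIS, K)

# Client-specific coefficients: each projector expressed in the basis.
coeffs = [basis.T @ P for P in client_projectors]     # (K, D_LLM) each

# Reliability weights: inverse residual energy outside the basis
# (a stand-in for the paper's reliability score).
residuals = [np.linalg.norm(P - basis @ c)
             for P, c in zip(client_projectors, coeffs)]
w = np.array([1.0 / (r + 1e-8) for r in residuals])
w /= w.sum()

# Reliability-weighted fusion of coefficients, mapped back through the basis.
fused = basis @ sum(wi * c for wi, c in zip(w, coeffs))

# --- Orthogonality-preserved momentum on the basis (simplified) ---
momentum = rng.normal(size=basis.shape) * 0.1
# Project the accumulated direction onto the tangent space of the
# Stiefel manifold at `basis`, so the update respects orthogonality.
sym = (basis.T @ momentum + momentum.T @ basis) / 2
tangent = momentum - basis @ sym
# QR retraction keeps the updated basis orthonormal.
new_basis, _ = np.linalg.qr(basis + 0.1 * tangent)

print(np.allclose(new_basis.T @ new_basis, np.eye(K)))  # True
```

The point of the tangent projection plus retraction is that momentum can accumulate across rounds without drifting the basis off the orthogonal manifold, which is the geometric structure the abstract says must be preserved.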