Model Merging via Data-Free Covariance Estimation

arXiv cs.LG / 4/3/2026


Key Points

  • The paper presents a principled, layer-wise model merging approach framed as minimizing interference between tasks, aiming to connect model merging to more theoretically grounded objectives.
  • It addresses the common limitation that estimating per-layer covariance matrices usually requires auxiliary data by proposing a data-free method that estimates covariances from “difference matrices” instead.
  • The authors claim the new covariance-estimation strategy both removes the need for external data and lowers computational cost relative to prior data-dependent formulations.
  • Experiments on vision and language benchmarks with model sizes from 86M to 7B parameters show improved performance over existing data-free state-of-the-art model merging methods.
  • The work revisits and strengthens an interference-minimization framework by specifying conditions under which the data-free covariance estimation is valid, making the method more practically deployable.

Abstract

Model merging provides a way of cheaply combining individual models to produce a model that inherits each individual model's capabilities. While some merging methods can approach the performance of multitask training, they are often heuristically motivated and lack theoretical justification. A principled alternative is to pose model merging as a layer-wise optimization problem that directly minimizes interference between tasks. However, this formulation requires estimating per-layer covariance matrices from data, which may not be available when performing merging. In contrast, many of the heuristically motivated methods do not require auxiliary data, making them practically advantageous. In this work, we revisit the interference minimization framework and show that, under certain conditions, covariance matrices can be estimated directly from difference matrices, eliminating the need for data while also reducing computational costs. We validate our approach across vision and language benchmarks on models ranging from 86M to 7B parameters, outperforming previous data-free state-of-the-art merging methods.
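To make the idea concrete, here is a minimal sketch of a layer-wise, data-free merge. It assumes (these are illustrative assumptions, not the paper's exact estimator) that each task's per-layer covariance is approximated by a Gram matrix built from its difference matrix D_i = W_i − W_base, and that the layer-wise objective is the standard least-squares interference minimization, which has a closed-form solution. The function name and the eps regularizer are hypothetical.

```python
import numpy as np

def datafree_merge_layer(w_base, task_weights, eps=1e-6):
    """Merge one linear layer's weights from several fine-tuned models.

    Illustrative sketch: instead of estimating each task's input
    covariance C_i from data, build a proxy from the difference matrix
    D_i = W_i - W_base (an assumption; the paper's estimator may differ).
    The merged weight minimizes sum_i ||(W - W_i) C_i^{1/2}||_F^2.
    """
    d = w_base.shape[1]  # input dimension of the layer
    covs, weighted = [], []
    for w_i in task_weights:
        diff = w_i - w_base                    # "difference matrix" D_i
        c_i = diff.T @ diff + eps * np.eye(d)  # data-free covariance proxy
        covs.append(c_i)
        weighted.append(w_i @ c_i)
    # Closed-form minimizer: W* = (sum_i W_i C_i) (sum_i C_i)^{-1}
    return sum(weighted) @ np.linalg.inv(sum(covs))
```

Because the proxy uses only weight differences, no forward passes over auxiliary data are needed, which is the practical advantage the paper highlights; the closed-form solve also keeps the per-layer cost to a single matrix inversion.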