AI Navigate

An Empirical Study and Theoretical Explanation on Task-Level Model-Merging Collapse

arXiv cs.AI / 3/11/2026

Ideas & Deep Analysis · Models & Research

Key Points

  • The paper studies task-level model merging collapse, a failure mode where merging independently fine-tuned LLMs on different tasks leads to catastrophic performance degradation.
  • The authors find that representational incompatibility between tasks strongly correlates with merging collapse, whereas traditional parameter-space conflict metrics do not.
  • Extensive experiments and statistical analyses demonstrate this phenomenon across multiple merging methods, challenging prevailing assumptions in model merging research.
  • A theoretical explanation grounded in rate-distortion theory with a dimension-dependent bound is provided, establishing fundamental limits on task mergeability.
  • This work enables better understanding of the constraints and challenges in merging specialist models, informing future model reuse and integration strategies without retraining.
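The "merging" in question combines the weights of independently fine-tuned specialists. A common baseline (task arithmetic: average the per-task weight deltas and add them back to the base) can be sketched as below; this is an illustrative toy, not the paper's method, and the function and variable names are hypothetical. The toy example also shows the destructive-interference intuition behind collapse: two specialists that pull a weight in opposite directions cancel out after merging.

```python
import numpy as np

def merge_task_vectors(base, finetuned_models, alpha=1.0):
    """Illustrative task-arithmetic merge (not the paper's method):
    average the task vectors (finetuned - base) and add them to the base."""
    task_vectors = [{k: m[k] - base[k] for k in base} for m in finetuned_models]
    merged = {}
    for k in base:
        merged[k] = base[k] + alpha * np.mean([tv[k] for tv in task_vectors], axis=0)
    return merged

# Toy example: two "specialists" that push the same weights in opposite directions.
base = {"w": np.zeros(4)}
model_a = {"w": np.array([1.0, 1.0, 0.0, 0.0])}    # task A update
model_b = {"w": np.array([-1.0, -1.0, 0.0, 0.0])}  # task B update (conflicting)
merged = merge_task_vectors(base, [model_a, model_b])
print(merged["w"])  # conflicting directions cancel: [0. 0. 0. 0.]
```

Note that the paper's finding is that this kind of parameter-space conflict is *not* what predicts collapse; the sketch only makes concrete what "merging" and "interference" mean here.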


arXiv:2603.09463 (cs)
[Submitted on 10 Mar 2026]

Abstract: Model merging unifies independently fine-tuned LLMs derived from the same base model, enabling reuse and integration of parallel development efforts without retraining. In practice, however, merging does not always succeed: certain combinations of task-specialist models suffer catastrophic performance degradation after merging. We refer to this failure mode as merging collapse. Intuitively, collapse arises when the learned representations or parameter adjustments for different tasks are fundamentally incompatible, so that merging forces destructive interference rather than synergy. In this paper, we identify and characterize the phenomenon of task-level merging collapse, in which certain task combinations consistently trigger severe performance degradation across all merging methods. Through extensive experiments and statistical analysis, we demonstrate that representational incompatibility between tasks is strongly correlated with merging collapse, while parameter-space conflict metrics show minimal correlation, challenging conventional wisdom in the model merging literature. We provide a theoretical explanation of this phenomenon through rate-distortion theory with a dimension-dependent bound, establishing fundamental limits on task mergeability regardless of methodology.
Subjects: Artificial Intelligence (cs.AI)
Cite as: arXiv:2603.09463 [cs.AI]
  (or arXiv:2603.09463v1 [cs.AI] for this version)
  https://doi.org/10.48550/arXiv.2603.09463

Submission history

From: Yuan Cao
[v1] Tue, 10 Mar 2026 10:18:32 UTC (84 KB)