AI Navigate

An Empirical Study and Theoretical Explanation on Task-Level Model-Merging Collapse

arXiv cs.AI / March 11, 2026

Ideas & Deep Analysis / Models & Research

Key Points

  • This paper studies task-level model-merging collapse, a failure mode that occurs when merging large language models (LLMs) that were independently fine-tuned on different tasks. The term refers to a catastrophic drop in the merged model's performance. (A minimal sketch of this kind of merging follows this list.)
  • The authors find that representational incompatibility between tasks correlates strongly with merging collapse, whereas conventional parameter-space conflict metrics show almost no correlation (see the diagnostic sketch after the abstract below).
  • Through extensive experiments and statistical analysis, they show that the phenomenon is observed consistently across multiple merging methods, overturning a common assumption in the model-merging literature.
  • They give a theoretical explanation based on rate-distortion theory with a dimension-dependent bound, establishing fundamental limits on task mergeability.
  • The work deepens our understanding of the constraints and challenges of merging specialist models, and offers insights for designing future model-reuse and integration strategies that avoid retraining.
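To ground the terminology, here is a minimal sketch of the kind of merging studied: averaging task vectors (fine-tuned weights minus base weights) across specialists fine-tuned from the same base model. The function name, the single `alpha` scaling, and the toy tensors are illustrative assumptions; the paper evaluates multiple merging methods, none of which is reproduced exactly here.

```python
# Minimal sketch of task-vector merging (illustrative, not the paper's exact methods).
# Assumes every checkpoint was fine-tuned from the same base model, so all
# state_dicts share parameter names and shapes.
import torch

def merge_by_task_arithmetic(base_sd, finetuned_sds, alpha=1.0):
    """Merge specialists by averaging their task vectors.

    base_sd       -- state_dict of the shared base model
    finetuned_sds -- list of state_dicts fine-tuned on different tasks
    alpha         -- scaling applied to the averaged task vector
    """
    n = len(finetuned_sds)
    merged = {}
    for name, base_w in base_sd.items():
        # Task vector = fine-tuned weights minus base weights.
        avg_delta = sum(sd[name] - base_w for sd in finetuned_sds) / n
        merged[name] = base_w + alpha * avg_delta
    return merged

# Toy usage with random tensors standing in for real checkpoints.
base = {"w": torch.zeros(4)}
specialists = [{"w": torch.ones(4)}, {"w": -torch.ones(4)}]
print(merge_by_task_arithmetic(base, specialists))  # deltas cancel: merged "w" is zero
```

With alpha = 1 this reduces to plain weight averaging of the fine-tuned checkpoints; the paper's claim is that for some task combinations, every such recipe collapses.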


arXiv:2603.09463 (cs)
[Submitted on 10 Mar 2026]

Title: An Empirical Study and Theoretical Explanation on Task-Level Model-Merging Collapse

Authors: Yuan Cao and 7 other authors
Abstract: Model merging unifies independently fine-tuned LLMs from the same base model, enabling reuse and integration of parallel development efforts without retraining. In practice, however, merging does not always succeed: certain combinations of task-specialist models suffer catastrophic performance degradation after merging. We refer to this failure mode as merging collapse. Intuitively, collapse arises when the learned representations or parameter adjustments for different tasks are fundamentally incompatible, so that merging forces destructive interference rather than synergy. In this paper, we identify and characterize the phenomenon of task-level merging collapse, where certain task combinations consistently trigger severe performance degradation across all merging methods. Through extensive experiments and statistical analysis, we demonstrate that representational incompatibility between tasks is strongly correlated with merging collapse, while parameter-space conflict metrics show minimal correlation, challenging conventional wisdom in the model-merging literature. We provide a theoretical explanation of this phenomenon through rate-distortion theory with a dimension-dependent bound, establishing fundamental limits on task mergeability regardless of methodology.
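The abstract contrasts two families of collapse predictors: representational incompatibility and parameter-space conflict. The exact metrics are not spelled out on this page, so the sketch below uses two common stand-ins as assumptions: linear CKA between the two specialists' hidden activations on shared inputs (representation space), and cosine similarity between their flattened task vectors (parameter space).

```python
# Illustrative diagnostics (assumed stand-ins, not the paper's exact metrics):
# linear CKA for representational (in)compatibility, and cosine similarity of
# task vectors for parameter-space conflict.
import torch

def linear_cka(X, Y):
    """Linear CKA between activation matrices X, Y of shape (n_samples, dim).

    Values near 1 mean highly similar representations; low values suggest
    the kind of representational incompatibility the paper links to collapse.
    """
    X = X - X.mean(dim=0, keepdim=True)   # center each feature
    Y = Y - Y.mean(dim=0, keepdim=True)
    hsic = (X.T @ Y).norm() ** 2          # ||X^T Y||_F^2
    norm_x = (X.T @ X).norm()             # ||X^T X||_F
    norm_y = (Y.T @ Y).norm()             # ||Y^T Y||_F
    return hsic / (norm_x * norm_y)

def task_vector_cosine(base_sd, sd_a, sd_b):
    """Cosine similarity between the flattened task vectors of two specialists."""
    delta_a = torch.cat([(sd_a[k] - base_sd[k]).flatten() for k in base_sd])
    delta_b = torch.cat([(sd_b[k] - base_sd[k]).flatten() for k in base_sd])
    return torch.nn.functional.cosine_similarity(delta_a, delta_b, dim=0)
```

Under the paper's finding, a representation-space score like linear_cka should track collapse while task_vector_cosine should not. For context on the theory side, the classical rate-distortion function R(D) = min over p(x̂|x) with E[d(X, X̂)] ≤ D of I(X; X̂) gives the minimum information rate achievable at distortion D; the paper's dimension-dependent bound applies this style of limit to argue that incompatible task representations cannot be encoded into one set of weights without large distortion, regardless of merging method.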
Subjects: Artificial Intelligence (cs.AI)
Cite as: arXiv:2603.09463 [cs.AI]
  (or arXiv:2603.09463v1 [cs.AI] for this version)
  https://doi.org/10.48550/arXiv.2603.09463

Submission history

From: Yuan Cao
[v1] Tue, 10 Mar 2026 10:18:32 UTC (84 KB)