Evolutionary Negative Module Pruning for Better LoRA Merging

arXiv cs.AI / 4/21/2026

📰 News · Models & Research

Key Points

  • The paper identifies “negative modules” in LoRA layers: specific adapter modules that reduce overall model performance when merged with others.
  • It proposes ENMP (Evolutionary Negative Module Pruning), a plug-and-play pruning approach that removes these harmful LoRA modules before merging.
  • ENMP uses an evolutionary search strategy to handle the discrete, non-differentiable nature of selecting which modules to prune.
  • Experiments show that ENMP improves the performance of existing LoRA merging methods, achieving new state-of-the-art results across both language and vision tasks.
  • The authors provide a public code repository to support adoption and further experimentation (ENMP-LoRAMerging).
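
The evolutionary search described above can be illustrated with a minimal sketch: a genetic algorithm over binary keep/prune masks, where each bit decides whether one LoRA module survives the merge. This is not the authors' implementation; `evolve_prune_mask`, `toy_fitness`, and all hyperparameters here are illustrative assumptions standing in for a real merged-model evaluation.

```python
import random

def evolve_prune_mask(num_modules, fitness, pop_size=20, generations=60,
                      mutation_rate=0.1, seed=0):
    """Evolutionary search over binary keep/prune masks for LoRA modules.

    `fitness(mask)` scores the merged model that keeps only the modules
    where mask[i] == 1 (higher is better). Elitism retains the best masks,
    so the top score never decreases across generations.
    """
    rng = random.Random(seed)
    # Start from a random population of binary masks.
    pop = [[rng.randint(0, 1) for _ in range(num_modules)]
           for _ in range(pop_size)]
    for _ in range(generations):
        elite = sorted(pop, key=fitness, reverse=True)[: pop_size // 2]
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, num_modules)            # one-point crossover
            child = [bit ^ (rng.random() < mutation_rate)  # bit-flip mutation
                     for bit in a[:cut] + b[cut:]]
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

# Toy stand-in fitness: modules 2 and 5 are "negative" and hurt the merge,
# every other module contributes +1 when kept.
NEGATIVE = {2, 5}
def toy_fitness(mask):
    return sum((-2 if i in NEGATIVE else 1) * bit for i, bit in enumerate(mask))

best = evolve_prune_mask(8, toy_fitness)
```

In the real setting, the fitness function would merge the surviving LoRA experts into the backbone and evaluate on a validation set, which is exactly why a gradient-free, population-based search fits this discrete, non-differentiable objective.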

Abstract

Merging multiple Low-Rank Adaptation (LoRA) experts into a single backbone is a promising approach for efficient multi-task deployment. While existing methods strive to alleviate interference via weight interpolation or subspace alignment, they rest upon the implicit assumption that all LoRA matrices contribute constructively to the merged model. In this paper, we uncover a critical bottleneck in current merging paradigms: the existence of *negative modules*, specific LoRA layers that inherently degrade global performance upon merging. We propose **E**volutionary **N**egative **M**odule **P**runing (**ENMP**), a plug-and-play LoRA pruning method to locate and exclude these detrimental modules prior to merging. By leveraging an evolutionary search strategy, ENMP effectively navigates the discrete, non-differentiable landscape of module selection to identify optimal pruning configurations. Extensive evaluations demonstrate that ENMP consistently boosts the performance of existing merging algorithms, achieving a new state-of-the-art across both language and vision domains. Code is available at https://github.com/CaoAnda/ENMP-LoRAMerging.