Training-Free Dynamic Upcycling of Expert Language Models

arXiv cs.CL / 4/1/2026


Key Points

  • The paper introduces Dynamic Upcycling MoE (DUME), a training-free method that reuses already-trained dense expert language models from different domains to build a single Mixture-of-Experts system.
  • DUME avoids expensive multitask fine-tuning by using a closed-form ridge regression solution, eliminating further optimization during construction and enabling experts to be added dynamically.
  • The authors report strong empirical results: in causal language modeling, DUME retains up to 97.6% of a domain-specialized dense expert’s performance, and in reasoning it can reach 102.1% of the dense expert’s performance.
  • The work suggests the constructed MoE can later be fine-tuned for additional gains, while remaining cost-efficient and scalable compared with conventional expertise-finetuning approaches.
  • The research code is released publicly, supporting reproducibility and experimentation by others in the community.

Abstract

Large Language Models (LLMs) have achieved remarkable performance on a wide range of specialized tasks, exhibiting strong problem-solving capabilities. However, training these models is prohibitively expensive, and they often lack domain-specific expertise because they rely on general knowledge datasets. Expertise finetuning can address this issue; however, it often leads to overspecialization, and developing a single multi-domain expert remains difficult due to diverging objectives. Furthermore, multitask training is challenging due to interference and catastrophic forgetting. Existing work proposes combining the expertise of dense models within a Mixture of Experts (MoE) architecture, although this approach still requires multitask finetuning. To address these issues, we introduce Dynamic Upcycling MoE (DUME), a novel approach that reuses dense experts trained on different domains to construct a unified MoE model. Our method builds a single multitask model that preserves the capabilities of the original dense experts without requiring additional training. DUME is both cost-efficient and scalable: by leveraging the closed-form solution of ridge regression, it eliminates the need for further optimization and enables experts to be added dynamically while maintaining the model's original performance. We demonstrate that DUME consistently outperforms baseline approaches in both causal language modeling and reasoning settings. Finally, we also show that the DUME model can be fine-tuned to further improve performance. We show that, in the causal language modeling setting, DUME can retain up to 97.6% of the performance of a dense expert model specialized in one particular domain, and that it can also surpass the dense expert in the reasoning setting, where it achieves 102.1% of the dense expert's performance. Our code is available at: github.com/gensyn-ai/dume.
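The key ingredient the abstract leans on is that ridge regression admits a closed-form solution, so a linear map (e.g. a router or adapter weight) can be computed in one linear solve rather than by gradient descent. The sketch below illustrates only that closed form, W = (XᵀX + λI)⁻¹XᵀY; how DUME applies it to combine the dense experts is not specified here, so the function name and the toy data are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def ridge_closed_form(X, Y, lam=1e-2):
    """Closed-form ridge regression: W = (X^T X + lam*I)^{-1} X^T Y.

    X: (n, d) inputs (e.g. hidden representations), Y: (n, k) targets.
    Solving the regularized normal equations avoids any iterative training.
    """
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

# Toy check: recover a known linear map from noiseless data.
rng = np.random.default_rng(0)
X = rng.normal(size=(128, 16))        # 128 samples, dimension 16
W_true = rng.normal(size=(16, 4))     # ground-truth map to 4 outputs
Y = X @ W_true
W = ridge_closed_form(X, Y, lam=1e-6)
print(np.allclose(W, W_true, atol=1e-3))
```

Because the solve is a single O(d³) operation per fitted map, adding a new expert later only requires recomputing that solve, which is consistent with the dynamic, training-free construction the paper describes.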