On Token's Dilemma: Dynamic MoE with Drift-Aware Token Assignment for Continual Learning of Large Vision Language Models

arXiv cs.LG / 3/31/2026


Key Points

  • The paper studies why MoE-based multimodal continual instruction tuning for large vision-language models still forgets prior knowledge, attributing the core issue to "routing-drift," in which old-task tokens are misrouted to newly added experts.
  • It identifies a token-level failure mode (“token’s dilemma”): ambiguous or old tokens in new-task data provide little learning benefit but can trigger forgetting because their routing assignments become unstable during training.
  • To address this, the authors propose LLaVA-DyMoE, a dynamic MoE framework that incrementally expands experts while using drift-aware, token-level assignment guidance and routing-score regularization to preserve expert-group separation (see the routing sketch after this list).
  • Experiments on continual instruction tuning show the method reduces forgetting (reported as a ~12% reduction) and improves mean final accuracy by over 7% versus baseline approaches.
  • An online project page accompanies the work and provides access to the DyMoE resources.
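
As a rough illustration of the drift-aware assignment idea, the sketch below shows one way a router over frozen old experts and newly added experts could steer "old" tokens away from new experts by masking new-expert scores for tokens the frozen old router already claims confidently. This is a minimal, hypothetical reconstruction in PyTorch, not the authors' implementation; the confidence-threshold criterion and all function names are assumptions.

```python
# Minimal sketch (NOT the paper's code) of drift-aware token assignment for an
# expanded MoE router. Assumptions: top-1 softmax routing, old experts frozen,
# and a hypothetical per-token criterion based on old-router confidence.
import torch
import torch.nn.functional as F


def drift_aware_routing(tokens, old_router_w, new_router_w, old_conf_threshold=0.5):
    """tokens: (num_tokens, d); old_router_w: (n_old, d); new_router_w: (n_new, d)."""
    old_logits = tokens @ old_router_w.t()   # scores for frozen old experts
    new_logits = tokens @ new_router_w.t()   # scores for newly added experts

    # Hypothetical drift criterion: tokens the old router already claims with
    # high confidence are treated as "old" and barred from the new experts.
    old_conf = F.softmax(old_logits, dim=-1).max(dim=-1).values
    keep_old = old_conf > old_conf_threshold            # (num_tokens,) boolean mask
    masked_new = new_logits.masked_fill(keep_old.unsqueeze(-1), float("-inf"))

    guided_logits = torch.cat([old_logits, masked_new], dim=-1)
    probs = F.softmax(guided_logits, dim=-1)
    expert_idx = probs.argmax(dim=-1)                   # top-1 expert per token
    return expert_idx, probs


if __name__ == "__main__":
    d, n_old, n_new, n_tok = 16, 4, 2, 8
    tokens = torch.randn(n_tok, d)
    old_w, new_w = torch.randn(n_old, d), torch.randn(n_new, d)
    idx, _ = drift_aware_routing(tokens, old_w, new_w)
    print(idx)  # experts 0..n_old-1 for confidently "old" tokens
```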

Abstract

Multimodal Continual Instruction Tuning aims to continually enhance Large Vision Language Models (LVLMs) by learning from new data without forgetting previously acquired knowledge. Mixture of Experts (MoE) architectures naturally facilitate this by incrementally adding new experts and expanding routers while keeping the existing ones frozen. However, despite expert isolation, MoE-based continual learners still suffer from forgetting due to routing-drift: old-task tokens become mistakenly attracted to newly added experts, degrading performance on prior tasks. We analyze the failure mode at the token level and reveal the token's dilemma: ambiguous and old tokens in new-task data offer minimal learning benefit yet induce forgetting when routed to new experts, due to their ambiguous routing assignment during training. Motivated by this, we propose LLaVA-DyMoE, a dynamic MoE framework that incrementally expands the MoE with drift-aware token assignment. We characterize token types via their routing score distributions and apply targeted regularization. Specifically, a token-level assignment guidance steers ambiguous and old tokens away from new experts to preserve established routing patterns and alleviate routing-drift, while complementary routing score regularizations enforce expert-group separation and promote new-expert specialization. Extensive experiments demonstrate that our LLaVA-DyMoE effectively mitigates routing-drift-induced forgetting, achieving over a 7% gain in mean final accuracy and a 12% reduction in forgetting compared to baselines. The project page is https://zhaoc5.github.io/DyMoE.
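
The abstract mentions two complementary routing-score regularizers: one enforcing expert-group separation and one promoting new-expert specialization. The snippet below sketches plausible forms for such terms, assuming a hinge loss on the gap between the best old-expert and best new-expert scores, plus an entropy penalty over the new experts; the paper's exact losses may differ, and the `is_old_like` flag is a hypothetical stand-in for the paper's token-type characterization from routing-score distributions.

```python
# Minimal sketch (assumed forms, not the paper's exact losses) of routing-score
# regularization for expert-group separation and new-expert specialization.
import torch
import torch.nn.functional as F


def routing_regularizers(routing_logits, n_old, is_old_like, margin=1.0):
    """routing_logits: (num_tokens, n_old + n_new); is_old_like: (num_tokens,) bool."""
    old_scores = routing_logits[:, :n_old]
    new_scores = routing_logits[:, n_old:]

    # Separation: for tokens flagged as old/ambiguous, the best old-expert score
    # should exceed the best new-expert score by a margin (hinge penalty otherwise).
    if is_old_like.any():
        gap = new_scores.max(dim=-1).values - old_scores.max(dim=-1).values
        sep_loss = F.relu(gap + margin)[is_old_like].mean()
    else:
        sep_loss = routing_logits.new_zeros(())

    # Specialization: low routing entropy over new experts for the remaining
    # (new-task) tokens, encouraging each to commit to a specific new expert.
    if (~is_old_like).any():
        new_probs = F.softmax(new_scores[~is_old_like], dim=-1)
        spec_loss = -(new_probs * new_probs.clamp_min(1e-9).log()).sum(-1).mean()
    else:
        spec_loss = routing_logits.new_zeros(())

    return sep_loss, spec_loss
```

In this sketch, both terms would be added to the task loss with small weighting coefficients during new-task training, leaving the frozen experts untouched while shaping only the router's scores.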