Sparse-Dense Mixture of Experts Adapter for Multi-Modal Tracking

arXiv cs.CV / 3/17/2026

📰 News · Models & Research

Key Points

  • The paper introduces Sparse-Dense Mixture of Experts Adapter (SDMoEA) for parameter-efficient fine-tuning in multi-modal tracking, addressing cross-modal heterogeneity under a unified model.
  • It features an SDMoE module with a sparse MoE to capture modality-specific information and a dense-shared MoE for cross-modal information.
  • A Gram-based Semantic Alignment Hypergraph Fusion (GSAHF) module is proposed to align semantics across modalities using Gram matrices and enable high-order fusion.
  • Experiments on benchmarks such as LasHeR, RGBT234, VTUAV, VisEvent, COESOT, DepthTrack, and VOT-RGBD2022 show superior performance compared with other PEFT approaches.
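To make the sparse-dense split concrete, here is a minimal NumPy sketch of the adapter idea described above: a sparse top-1 router dispatches each token to one modality-specific expert, while a dense expert shared across all tokens models cross-modal information, and the result is added back residually. All names, sizes, and the ReLU-bottleneck expert design are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class SDMoEAdapter:
    """Hypothetical sketch of a sparse-dense MoE adapter:
    a sparse top-1 router over modality-specific experts plus
    one dense expert shared by every token (details assumed)."""

    def __init__(self, dim, hidden, n_experts):
        s = 1.0 / np.sqrt(dim)
        # Modality-specific bottleneck experts, routed sparsely.
        self.experts = [(rng.normal(0, s, (dim, hidden)),
                         rng.normal(0, s, (hidden, dim)))
                        for _ in range(n_experts)]
        # Dense expert shared across modalities.
        self.shared = (rng.normal(0, s, (dim, hidden)),
                       rng.normal(0, s, (hidden, dim)))
        self.router = rng.normal(0, s, (dim, n_experts))

    def __call__(self, x):
        # x: (tokens, dim). Route each token to its top-1 expert.
        gates = softmax(x @ self.router)       # (tokens, n_experts)
        top = gates.argmax(axis=-1)            # (tokens,)
        out = np.zeros_like(x)
        for i, (w1, w2) in enumerate(self.experts):
            mask = top == i
            if mask.any():
                h = np.maximum(x[mask] @ w1, 0.0)      # ReLU bottleneck
                out[mask] = gates[mask, i:i + 1] * (h @ w2)
        # Dense-shared branch: applied to every token regardless of routing.
        w1, w2 = self.shared
        out += np.maximum(x @ w1, 0.0) @ w2
        return x + out                         # residual adapter output

tokens = rng.normal(size=(6, 16))
adapter = SDMoEAdapter(dim=16, hidden=8, n_experts=2)
y = adapter(tokens)
```

Because only one modality-specific expert fires per token while the shared expert always runs, the compute cost stays close to a single adapter even as the number of experts grows.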

Abstract

Parameter-efficient fine-tuning (PEFT) techniques, such as prompts and adapters, are widely used in multi-modal tracking because they alleviate issues of full-model fine-tuning, including time inefficiency, high resource consumption, parameter storage burden, and catastrophic forgetting. However, due to cross-modal heterogeneity, most existing PEFT-based methods struggle to effectively represent multi-modal features within a unified framework with shared parameters. To address this problem, we propose a novel Sparse-Dense Mixture of Experts Adapter (SDMoEA) framework for PEFT-based multi-modal tracking under a unified model structure. Specifically, we design an SDMoE module as the multi-modal adapter to model modality-specific and shared information efficiently. SDMoE consists of a sparse MoE and a dense-shared MoE: the former captures modality-specific information, while the latter models shared cross-modal information. Furthermore, to overcome limitations of existing tracking methods in modeling high-order correlations during multi-level multi-modal fusion, we introduce a Gram-based Semantic Alignment Hypergraph Fusion (GSAHF) module. It first employs Gram matrices for cross-modal semantic alignment, ensuring that the constructed hypergraph accurately reflects semantic similarity and high-order dependencies between modalities. The aligned features are then integrated into the hypergraph structure to exploit its ability to model high-order relationships, enabling deep fusion of multi-level multi-modal information. Extensive experiments demonstrate that the proposed method achieves superior performance compared with other PEFT approaches on several multi-modal tracking benchmarks, including LasHeR, RGBT234, VTUAV, VisEvent, COESOT, DepthTrack, and VOT-RGBD2022.
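A rough NumPy sketch of the two ingredients the abstract attributes to GSAHF: Gram matrices of L2-normalized features compare the internal semantic structure of two modalities (a small Frobenius gap means the modalities are aligned), and a similarity-based incidence matrix groups each token with its nearest neighbors into a hyperedge for high-order fusion. The loss form, the neighborhood-based hyperedge construction, and all function names are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(1)

def gram(feats):
    # L2-normalise tokens, then take the Gram matrix of pairwise
    # cosine similarities: feats (n, d) -> (n, n).
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    return f @ f.T

def gram_alignment_loss(f_rgb, f_aux):
    # Frobenius-style distance between the two modalities' Gram
    # matrices; small when both share the same semantic structure.
    g1, g2 = gram(f_rgb), gram(f_aux)
    return float(np.mean((g1 - g2) ** 2))

def hyperedge_incidence(feats, k=3):
    # One hyperedge per token: the token plus its (k-1) nearest
    # neighbours under Gram similarity (self is always the top hit).
    # Returns an incidence matrix H of shape (tokens, hyperedges).
    g = gram(feats)
    n = len(feats)
    H = np.zeros((n, n))
    for e in range(n):
        nbrs = np.argsort(-g[e])[:k]
        H[nbrs, e] = 1.0
    return H

# Toy usage: RGB and thermal token features for the same 5 positions.
f_rgb = rng.normal(size=(5, 8))
f_tir = rng.normal(size=(5, 8))
loss = gram_alignment_loss(f_rgb, f_tir)
H = hyperedge_incidence(np.concatenate([f_rgb, f_tir]), k=3)
```

Once `H` is built over the aligned features, a standard hypergraph convolution can propagate information along hyperedges, which is how high-order (many-to-many) relations between tokens from different modalities get fused rather than only pairwise ones.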