SEATrack: Simple, Efficient, and Adaptive Multimodal Tracker

arXiv cs.CV · April 15, 2026


Key Points

  • The paper introduces SEATrack, a two-stream multimodal tracker designed to improve the balance between tracking performance and parameter efficiency compared with recent PEFT approaches that often increase model size.
  • It argues that misaligned cross-modal matching attention is a key driver of the performance–efficiency trade-off, and addresses this with AMG-LoRA, combining Low-Rank Adaptation (LoRA) and Adaptive Mutual Guidance (AMG) to refine and align attention across modalities.
  • For cross-modal fusion, SEATrack replaces local fusion with a Hierarchical Mixture of Experts (HMoE) to capture global relations while keeping computation efficient.
  • Experiments reportedly show improved performance over state-of-the-art methods on RGB–T, RGB–D, and RGB–E tracking benchmarks while preserving the parameter efficiency that PEFT promises; the authors have released their code.
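The AMG-LoRA idea in the key points above can be sketched minimally. The code below assumes only the broad shape of the technique: a standard LoRA update on a frozen weight, plus a gated blend that pulls the two modalities' matching-attention maps toward each other. The fixed `gate` scalar and all function names are illustrative stand-ins, not the paper's adaptive formulation.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def lora_update(w, a, b, alpha=16):
    """Standard LoRA: adapt a frozen weight w with low-rank factors
    a (r x d_in) and b (d_out x r); only a and b are trained."""
    r = a.shape[0]
    return w + (alpha / r) * (b @ a)

def mutual_guidance(attn_rgb, attn_aux, gate=0.3):
    """Illustrative mutual guidance (NOT the paper's AMG): pull each
    modality's matching attention toward the other's to reduce
    conflicting responses. The paper predicts the gate adaptively;
    here it is a fixed scalar for clarity."""
    return ((1 - gate) * attn_rgb + gate * attn_aux,
            (1 - gate) * attn_aux + gate * attn_rgb)

# LoRA with the usual zero-init on b leaves the frozen weight
# untouched at the start of training.
rng = np.random.default_rng(0)
d, r = 8, 2
w = rng.normal(size=(d, d))
a = rng.normal(size=(r, d))
b = np.zeros((d, r))
assert np.allclose(lora_update(w, a, b), w)

# Toy matching-attention rows that disagree on the target location.
a_rgb = softmax(np.array([[4.0, 1.0, 0.0, 0.0]]))
a_aux = softmax(np.array([[0.0, 1.0, 4.0, 0.0]]))
r_rgb, r_aux = mutual_guidance(a_rgb, a_aux)
# After guidance the two maps are strictly closer (smaller L1 gap),
# while each row still sums to 1.
print(np.abs(a_rgb - a_aux).sum(), np.abs(r_rgb - r_aux).sum())
```

With a blend weight `gate = 0.3`, the gap between the two maps shrinks by a factor of `1 - 2*gate = 0.4`, which is the toy version of "aligning conflicting matching attention" described in the paper.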

Abstract

Parameter-efficient fine-tuning (PEFT) in multimodal tracking reveals a concerning trend where recent performance gains are often achieved at the cost of inflated parameter budgets, which fundamentally erodes PEFT's efficiency promise. In this work, we introduce SEATrack, a Simple, Efficient, and Adaptive two-stream multimodal tracker that tackles this performance-efficiency dilemma from two complementary perspectives. We first prioritize cross-modal alignment of matching responses, an underexplored yet pivotal factor that we argue is essential for breaking the trade-off. Specifically, we observe that modality-specific biases in existing two-stream methods generate conflicting matching attention maps, thereby hindering effective joint representation learning. To mitigate this, we propose AMG-LoRA, which seamlessly integrates Low-Rank Adaptation (LoRA) for domain adaptation with Adaptive Mutual Guidance (AMG) to dynamically refine and align attention maps across modalities. We then depart from conventional local fusion approaches by introducing a Hierarchical Mixture of Experts (HMoE) that enables efficient global relation modeling, effectively balancing expressiveness and computational efficiency in cross-modal fusion. Equipped with these innovations, SEATrack makes notable progress over state-of-the-art methods in balancing performance with efficiency across RGB-T, RGB-D, and RGB-E tracking tasks. Code is available at https://github.com/AutoLab-SAI-SJTU/SEATrack.
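As a rough illustration of why Mixture-of-Experts fusion can model global relations while staying cheap, the sketch below routes each fused token to a single expert via a learned gate, so only one expert runs per token. This is a generic sparse-MoE step under my own assumptions (random toy weights, top-1 routing), not the paper's hierarchical HMoE design.

```python
import numpy as np

rng = np.random.default_rng(0)

def moe_layer(tokens, experts, gate_w, top_k=1):
    """Generic sparse-MoE fusion step (illustrative, not SEATrack's HMoE):
    a learned gate scores every expert per token, but only the top-k
    experts are actually evaluated, keeping compute roughly constant
    as the expert pool grows."""
    logits = tokens @ gate_w                        # (n_tokens, n_experts)
    probs = np.exp(logits - logits.max(-1, keepdims=True))
    probs /= probs.sum(-1, keepdims=True)           # softmax gate
    out = np.zeros_like(tokens)
    for i, tok in enumerate(tokens):
        top = np.argsort(probs[i])[::-1][:top_k]    # chosen expert indices
        for e in top:
            out[i] += probs[i, e] * (tok @ experts[e])
    return out

d, n_experts, n_tokens = 16, 4, 8
tokens = rng.normal(size=(n_tokens, d))             # e.g. concatenated RGB + auxiliary-modality tokens
experts = rng.normal(size=(n_experts, d, d)) * 0.1  # one linear expert each
gate_w = rng.normal(size=(d, n_experts))
fused = moe_layer(tokens, experts, gate_w)
print(fused.shape)  # (8, 16)
```

With top-1 routing, per-token cost is one expert's matmul regardless of how many experts exist; the "hierarchical" part of HMoE, which this sketch omits, concerns how the paper organizes experts to capture global cross-modal relations.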