AIM: Asymmetric Information Masking for Visual Question Answering Continual Learning

arXiv cs.CL / 4/17/2026


Key Points

  • The paper argues that continual visual question answering (VQA) methods designed for symmetric, unimodal models fail for modern vision-language models (VLMs) because their trainable parts are inherently asymmetric.
  • It explains that this asymmetry biases standard global regularization toward the large language decoder, leaving the smaller but crucial visual projection layers exposed to interference and thus prone to catastrophic forgetting.
  • The proposed method, Asymmetric Information Masking (AIM), improves stability-plasticity trade-offs by applying modality-specific, targeted masks based on sensitivity to better protect vulnerable components.
  • Experiments on VQA v2 and GQA in continual VQA settings show AIM delivers state-of-the-art average performance and reduced average forgetting, and better retains compositional generalization to new skill-concept combinations.
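The core idea in the key points, estimating parameter sensitivity per modality and masking the most sensitive weights within each modality rather than globally, can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the function name, the squared-gradient (Fisher-style) sensitivity proxy, and the per-modality quantile threshold are all assumptions made for the example.

```python
import numpy as np

def sensitivity_masks(grads_by_module, modality_of, keep_frac=0.25):
    """Illustrative sketch of modality-specific sensitivity masking.

    grads_by_module: dict module_name -> 1-D array of accumulated gradients
                     from previous tasks
    modality_of:     dict module_name -> "visual" or "language"
    keep_frac:       fraction of most-sensitive weights to freeze (mask=0)
                     within each modality
    """
    # Sensitivity proxy: squared gradient magnitude (a Fisher-style estimate).
    sens = {m: g ** 2 for m, g in grads_by_module.items()}

    masks = {}
    # Compute the freezing threshold *within each modality* rather than over
    # all parameters, so the small visual projection is not drowned out by
    # the much larger language decoder when sensitivities are ranked.
    for modality in set(modality_of.values()):
        pooled = np.concatenate(
            [s for m, s in sens.items() if modality_of[m] == modality])
        thresh = np.quantile(pooled, 1.0 - keep_frac)
        for m, s in sens.items():
            if modality_of[m] == modality:
                # mask = 0 protects (freezes) highly sensitive weights;
                # mask = 1 leaves the rest plastic for the new task.
                masks[m] = (s < thresh).astype(np.float32)
    return masks

# Toy example: the visual projection's gradients are much smaller in
# magnitude than the language decoder's. A single global threshold would
# freeze only language weights; per-modality thresholds still protect the
# most sensitive visual-projection weights.
grads = {
    "vis_proj": np.array([0.1, 0.2, 0.05, 0.15]),
    "lm_head":  np.array([1.0, 2.0, 0.5, 1.5, 0.8, 1.2]),
}
mods = {"vis_proj": "visual", "lm_head": "language"}
masks = sensitivity_masks(grads, mods, keep_frac=0.25)
```

With a global threshold over the pooled sensitivities, every `vis_proj` weight would stay plastic (mask = 1) and only `lm_head` weights would be frozen; the per-modality variant is what gives the asymmetric protection the key points describe.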

Abstract

In continual visual question answering (VQA), existing Continual Learning (CL) methods are mostly built for symmetric, unimodal architectures. However, modern Vision-Language Models (VLMs) violate this assumption, as their trainable components are inherently asymmetric. This structural mismatch renders VLMs highly prone to catastrophic forgetting when learning from continuous data streams. Specifically, the asymmetry causes standard global regularization to favor the massive language decoder during optimization, leaving the smaller but critical visual projection layers highly vulnerable to interference. Consequently, this localized degradation leads to a severe loss of compositional reasoning capabilities. To address this, we propose Asymmetric Information Masking (AIM), which balances stability and plasticity by applying targeted masks based on modality-specific sensitivity. Experiments on VQA v2 and GQA under continual VQA settings show that AIM achieves state-of-the-art performance in both Average Performance (AP) and Average Forgetting (AF), while better preserving generalization to novel skill-concept compositions.