CL-VISTA: Benchmarking Continual Learning in Video Large Language Models

arXiv cs.CV / 4/2/2026


Key Points

  • The paper introduces CL-VISTA, a new benchmark designed to evaluate continual learning performance in Video Large Language Models under realistic, non-stationary distribution shifts.
  • It argues existing benchmarks overstate results because they often split one dataset into sub-tasks, leading to high redundancy and artificially low forgetting, especially for large-scale pre-trained models.
  • CL-VISTA includes 8 diverse continual learning tasks across perception, understanding, and reasoning, intended to trigger substantial shifts that better reveal catastrophic forgetting.
  • The authors propose a broad evaluation framework with 6 protocols measuring performance (including general video understanding to detect task-specific overfitting), computational efficiency, and memory footprint.
  • Extensive tests of 10 mainstream continual learning methods show an inherent trade-off: approaches that reduce forgetting often hurt generalization or require impractical compute/memory costs.
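The forgetting the benchmark is designed to expose is typically quantified from a task-by-task accuracy matrix. As a hedged illustration (standard continual-learning metrics, not necessarily CL-VISTA's exact protocols; the numbers are hypothetical), average final accuracy and forgetting can be computed like this:

```python
import numpy as np

def cl_metrics(A):
    """Standard continual-learning metrics from an accuracy matrix.

    A[i, j] = accuracy on task j measured after training on task i.
    Returns (average accuracy after the last task, mean forgetting),
    where forgetting on task j is its best accuracy at any earlier
    training stage minus its final accuracy.
    """
    T = A.shape[0]
    avg_acc = A[-1].mean()
    forgetting = np.mean([A[j:-1, j].max() - A[-1, j] for j in range(T - 1)])
    return avg_acc, forgetting

# Hypothetical 3-task run: each row is the evaluation after one more task.
A = np.array([
    [0.80, 0.00, 0.00],
    [0.70, 0.75, 0.00],
    [0.60, 0.65, 0.78],
])
avg, fgt = cl_metrics(A)  # avg ≈ 0.677, forgetting = 0.15
```

A single-dataset benchmark split into redundant sub-tasks keeps the off-diagonal drop in `A` small by construction, which is the inflation effect the paper argues CL-VISTA's distribution shifts avoid.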

Abstract

Video Large Language Models (Video-LLMs) require continual learning to adapt to non-stationary real-world data. However, existing benchmarks fall short of evaluating modern foundation models: many still rely on models without large-scale pre-training, and prevailing benchmarks typically partition a single dataset into sub-tasks, resulting in high task redundancy and negligible forgetting on pre-trained Video-LLMs. To address these limitations, we propose CL-VISTA, a benchmark tailored for continual video understanding of Video-LLMs. By curating 8 diverse tasks spanning perception, understanding, and reasoning, CL-VISTA induces substantial distribution shifts that effectively expose catastrophic forgetting. To systematically assess CL methods, we establish a comprehensive evaluation framework comprising 6 distinct protocols across 3 critical dimensions: performance, computational efficiency, and memory footprint. Notably, the performance dimension incorporates a general video understanding assessment to determine whether CL methods genuinely enhance foundational intelligence or merely induce task-specific overfitting. Extensive benchmarking of 10 mainstream CL methods reveals a fundamental trade-off: no single approach achieves universal superiority across all dimensions. Methods that successfully mitigate catastrophic forgetting tend to compromise generalization or incur prohibitive computational and memory overheads. We hope CL-VISTA provides critical insights for advancing continual learning in multimodal foundation models.