DSCA: Dynamic Subspace Concept Alignment for Lifelong VLM Editing
arXiv cs.CV / 4/10/2026
Key Points
- The paper tackles lifelong knowledge editing for vision-language models (VLMs), highlighting how sequential edits can cause catastrophic forgetting, degraded reasoning, and cross-modal misalignment.
- It argues that existing VLM editing methods still operate in entangled shared representation spaces and therefore suffer structural interference, even when they use gated adapters, activation edits, or parameter merging.
- The proposed Dynamic Subspace Concept Alignment (DSCA) decomposes the representation space into orthogonal semantic subspaces (via incremental clustering and PCA) and performs edits only within these transformed spaces to structurally isolate concepts.
- DSCA freezes the base model and uses a multi-term loss to preserve task fidelity, enforce edit locality, and maintain cross-modal alignment, yielding reported gains in single-edit success and long-sequence stability.
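One plausible way to compose the multi-term objective described above (the notation here is hypothetical, not taken from the paper): an edit-fidelity term on the target fact, a locality penalty keeping the edited model close to the frozen base model on unrelated inputs, and a cosine term keeping image and text embeddings aligned.

```latex
\mathcal{L}_{\text{total}} =
\underbrace{\mathcal{L}_{\text{edit}}\big(f_{\theta+\Delta}(x_e),\, y_e\big)}_{\text{task fidelity}}
+ \lambda_{\text{loc}}\,
\underbrace{\mathbb{E}_{x \notin \mathcal{E}}\,\big\| f_{\theta+\Delta}(x) - f_{\theta}(x) \big\|_2^2}_{\text{edit locality}}
+ \lambda_{\text{align}}\,
\underbrace{\big(1 - \cos(v_{\text{img}},\, v_{\text{txt}})\big)}_{\text{cross-modal alignment}}
```

Here $\theta$ is frozen and only the subspace-restricted update $\Delta$ is trained; $\mathcal{E}$ denotes the edit set and $\lambda_{\text{loc}}, \lambda_{\text{align}}$ weight the preservation terms.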
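To make the subspace idea above concrete, here is a minimal toy sketch (not the authors' implementation, and every function name is hypothetical): features are routed to a concept cluster, each cluster gets a principal subspace via PCA, and an edit vector is projected into that subspace so directions occupied by other concepts are left structurally untouched.

```python
# Toy sketch of DSCA-style subspace-restricted editing. Assumptions:
# concepts form separable clusters, and one principal direction per
# cluster suffices for illustration. Not the paper's actual algorithm.
import math

def mean_vec(points):
    """Component-wise mean of a list of feature vectors."""
    d = len(points[0])
    return [sum(p[i] for p in points) / len(points) for i in range(d)]

def nearest_centroid(x, centroids):
    """Route a feature vector to its closest concept cluster."""
    dists = [sum((a - b) ** 2 for a, b in zip(x, c)) for c in centroids]
    return dists.index(min(dists))

def principal_direction(points, iters=200):
    """Top principal component via power iteration on the covariance."""
    d = len(points[0])
    mu = mean_vec(points)
    centered = [[p[i] - mu[i] for i in range(d)] for p in points]
    v = [1.0] * d
    for _ in range(iters):
        w = [0.0] * d
        for x in centered:  # w = (X^T X) v, one sample at a time
            dot = sum(xi * vi for xi, vi in zip(x, v))
            for i in range(d):
                w[i] += dot * x[i]
        norm = math.sqrt(sum(wi * wi for wi in w)) or 1.0
        v = [wi / norm for wi in w]
    return v

def project_onto_subspace(edit, basis):
    """Keep only the component of an edit inside a concept's subspace."""
    out = [0.0] * len(edit)
    for b in basis:
        coeff = sum(e * bi for e, bi in zip(edit, b))
        for i in range(len(edit)):
            out[i] += coeff * b[i]
    return out

# Two toy concept clusters in 2-D, elongated along different axes.
cluster_a = [[5 + x, 0.05 * x] for x in (-2.0, -1.0, 1.0, 2.0)]
cluster_b = [[0.05 * y, 5 + y] for y in (-2.0, -1.0, 1.0, 2.0)]
centroids = [mean_vec(cluster_a), mean_vec(cluster_b)]

# 1) route a new feature to its concept, 2) confine the edit to that
# concept's 1-D principal subspace.
concept = nearest_centroid([5.3, 0.2], centroids)
subspace = [principal_direction(cluster_a)]
restricted = project_onto_subspace([1.0, 1.0], subspace)
# `restricted` lies along cluster A's dominant direction, so the edit
# barely perturbs the directions that cluster B's concept occupies.
```

In this toy case the raw edit `[1.0, 1.0]` has a large component along cluster B's axis; after projection that component is almost entirely removed, which is the isolation property the paper's orthogonal subspaces are designed to provide at scale.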