Continual Learning with Vision-Language Models via Semantic-Geometry Preservation
arXiv cs.CV / 3/13/2026
Key Points
- The paper identifies semantic geometry drift as a key challenge in continual learning for vision-language models and proposes an exemplar-free method to address it.
- It introduces Semantic Geometry Preservation for Continual Learning (SeGP-CL), which constructs a compact set of adversarial anchors via dual-targeted projected gradient descent (PGD): new-task seed images are perturbed so their embeddings move toward old-class semantics while a small perturbation budget keeps them faithful to the raw visual input.
- Training with SeGP-CL combines anchor-guided cross-modal geometry distillation (ACGD) to preserve cross-modal structure and a lightweight text semantic-geometry regularization (TSGR) to stabilize the textual reference frame.
- Experiments on five continual learning benchmarks demonstrate improved stability and forward transfer, achieving state-of-the-art results while better preserving the semantic geometry of vision-language models.
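The anchor-construction step described above can be sketched as a standard dual-targeted PGD loop. The paper's exact losses, encoders, and hyperparameters are not given here, so the code below is a minimal illustration under assumptions: a generic `image_encoder` stands in for the vision-language model's image tower, old-class semantics are represented by precomputed text embeddings, and faithfulness to raw visual space is enforced with an L-infinity ball of radius `epsilon`. All function and parameter names are hypothetical.

```python
import torch
import torch.nn.functional as F

def build_adversarial_anchors(seeds, image_encoder, old_text_embeds,
                              steps=10, alpha=0.01, epsilon=0.03):
    """Illustrative dual-targeted PGD (not the paper's exact procedure):
    perturb new-task seed images so their image embeddings move toward
    old-class text semantics, while projecting the perturbation back into
    an L-infinity ball so anchors stay close to the raw visual input."""
    delta = torch.zeros_like(seeds, requires_grad=True)
    text_dirs = F.normalize(old_text_embeds, dim=-1)
    for _ in range(steps):
        emb = F.normalize(image_encoder(seeds + delta), dim=-1)
        # Pull each perturbed seed toward its nearest old-class semantic
        # direction (maximize cosine similarity => minimize its negative).
        sim = emb @ text_dirs.T
        loss = -sim.max(dim=-1).values.mean()
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()   # signed gradient step
            delta.clamp_(-epsilon, epsilon)      # project to L-inf ball
        delta.grad.zero_()
    return (seeds + delta).detach()
```

A toy linear encoder is enough to exercise the loop; with a real vision-language model the same pattern applies, with `seeds` as pixel tensors and `old_text_embeds` as cached text-tower outputs for old class names.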