AEGIS: Anchor-Enforced Gradient Isolation for Knowledge-Preserving Vision-Language-Action Fine-Tuning

arXiv cs.LG / 4/20/2026


Key Points

  • The paper identifies a cross-modal gradient asymmetry problem: fine-tuning pre-trained vision-language models (VLMs) for robotic control injects high-magnitude continuous gradients from an action expert into the backbone, which quickly erode VQA performance.
  • It argues that existing defenses fall short: stop-gradient discards the valuable continuous supervision entirely, while LoRA constrains only the rank of updates, not their direction, and thus still overwrites the pre-trained semantic manifold.
  • The authors propose AEGIS, an anchor-enforced, layer-wise orthogonal gradient projection method that allows continuous MSE-style learning while preserving the original VQA manifold without co-training data or replay buffers.
  • AEGIS works by computing a static Gaussian “anchor” from masked VQA passes, then computing a Wasserstein-2-based anchor restoration gradient and applying Gram–Schmidt orthogonal projections per transformer layer to redirect destructive gradient components.
  • Experiments (as described) show that AEGIS sacrifices under 1% of average gradient energy while preventing cumulative activation drift and severe forgetting of the VLM’s VQA capability.
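The per-layer projection in the points above can be sketched in plain NumPy. This is an illustrative sketch, not the paper's implementation: the function name, the flattening of each layer's gradient into a vector, and the sign-based gating (analogous to PCGrad-style conflict detection) are all assumptions.

```python
import numpy as np

def project_out_destructive(task_grad, anchor_grad, eps=1e-12):
    """One Gram-Schmidt step per layer: if the task gradient conflicts
    with the anchor-restoration direction, remove its destructive
    component so the result is orthogonal to the anchor gradient."""
    g = task_grad.ravel()
    a = anchor_grad.ravel()
    coeff = np.dot(g, a) / (np.dot(a, a) + eps)
    if coeff >= 0.0:
        # Task gradient already agrees with anchor restoration:
        # leave it untouched (assumed gating, not stated in the paper).
        return task_grad
    projected = g - coeff * a  # orthogonal to the anchor direction
    return projected.reshape(task_grad.shape)
```

After projection, the remaining gradient carries the task's constructive content; only the component that would drive activations away from the anchor is shed, which is consistent with the reported sub-1% loss in average gradient energy.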

Abstract

Adapting pre-trained vision-language models (VLMs) for robotic control requires injecting high-magnitude continuous gradients from a flow-matching action expert into a backbone trained exclusively with cross-entropy. This cross-modal gradient asymmetry (the spectral dimensionality mismatch between low-rank MSE regression gradients and the high-dimensional semantic manifold sculpted by CE pre-training) causes rapid, severe erosion of the VLM's visual-question-answering (VQA) capability. Industry-standard defences either sever the gradient pathway entirely via stop-gradient, discarding the rich continuous supervision, or restrict parameter capacity through low-rank adapters (LoRA), which constrain the rank of updates but not their direction and thus still overwrite the pre-trained manifold. We introduce AEGIS (Anchor-Enforced Gradient Isolation System): a buffer-free, layer-wise orthogonal gradient projection framework that enables direct continuous MSE learning while preserving the pre-trained VQA manifold, without any co-training data or replay buffer. AEGIS pre-computes a static Gaussian reference anchor from masked VQA forward passes across all transformer layers, then at each training step constructs a Wasserstein-2 transport penalty that generates an anchor restoration gradient. A sequential dual backward pass decomposes the task and anchor gradients; for each transformer layer, AEGIS applies a single Gram-Schmidt orthogonal projection that bends the task gradient away from the destructive direction while preserving its constructive content. The projection sheds less than 1% of gradient energy on average, yet eliminates the cumulative activation drift that drives severe forgetting.
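For intuition on the Wasserstein-2 transport penalty, the W2 distance between two Gaussians has a closed form. The sketch below assumes diagonal covariances and illustrative variable names; the paper's exact anchor construction is not reproduced here.

```python
import numpy as np

def w2_sq_diag_gaussians(mu_p, var_p, mu_q, var_q):
    """Squared Wasserstein-2 distance between diagonal Gaussians
    N(mu_p, diag(var_p)) and N(mu_q, diag(var_q)):
    ||mu_p - mu_q||^2 + sum((sqrt(var_p) - sqrt(var_q))^2)."""
    mean_term = np.sum((mu_p - mu_q) ** 2)
    cov_term = np.sum((np.sqrt(var_p) - np.sqrt(var_q)) ** 2)
    return mean_term + cov_term

# Static anchor: per-layer activation statistics from masked VQA
# forward passes (values here are illustrative placeholders).
anchor_mu, anchor_var = np.zeros(4), np.ones(4)
# Current activation statistics under action-expert training.
cur_mu = np.array([0.1, 0.0, -0.2, 0.3])
cur_var = np.full(4, 1.2)
# Differentiating this penalty w.r.t. the current statistics would
# yield an anchor-restoration gradient of the kind the abstract describes.
penalty = w2_sq_diag_gaussians(cur_mu, cur_var, anchor_mu, anchor_var)
```

The penalty is zero exactly when the current activation statistics match the anchor, so its gradient points back toward the pre-trained VQA manifold.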