Breaking Lock-In: Preserving Steerability under Low-Data VLA Post-Training

arXiv cs.RO · April 28, 2026


Key Points

  • The paper studies “lock-in” in vision-language-action (VLA) policies after low-data supervised fine-tuning, where the model becomes overly specialized and stops handling novel instructions.
  • It characterizes two failure modes: concept lock-in (over-fixation on training objects and attributes) and spatial lock-in (over-fixation on training spatial targets).
  • The authors propose DeLock, which mitigates lock-in by preserving visual grounding during post-training and using test-time contrastive prompt guidance to steer the policy’s denoising dynamics.
  • Across eight simulation and real-world evaluations, DeLock outperforms strong baselines and can match or exceed a state-of-the-art generalist VLA post-trained with substantially more curated demonstrations.
  • The approach reduces the need for extra supervision signals or augmented datasets by leveraging the model’s internal pre-trained knowledge during post-training.

Abstract

Have you ever post-trained a generalist vision-language-action (VLA) policy on a small demonstration dataset, only to find that it stops responding to new instructions and is limited to behaviors observed during post-training? We identify this phenomenon as lock-in: after low-data supervised fine-tuning (SFT), the policy becomes overly specialized to the post-training data and fails to generalize to novel instructions, manifesting as concept lock-in (fixation on training objects and attributes) and spatial lock-in (fixation on training spatial targets). Many existing remedies introduce additional supervision signals, such as those derived from foundation models or auxiliary objectives, or rely on augmented datasets to recover generalization. In this paper, we show that the policy's internal pre-trained knowledge is sufficient: DeLock mitigates lock-in by preserving visual grounding during post-training and applying test-time contrastive prompt guidance to steer the policy's denoising dynamics according to novel instructions. Across eight simulation and real-world evaluations, DeLock consistently outperforms strong baselines and matches or exceeds the performance of a state-of-the-art generalist policy post-trained with substantially more curated demonstrations.
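To make the test-time steering idea concrete, here is a minimal sketch of contrastive prompt guidance in the style of classifier-free guidance, applied at each denoising step of an action-diffusion policy. This is an illustrative assumption, not the paper's exact formulation: the `denoise` callable, the prompt arguments, and the guidance weight `w` are all hypothetical placeholders standing in for DeLock's actual mechanism.

```python
import numpy as np

def contrastive_prompt_guidance(denoise, action, t, novel_prompt, contrast_prompt, w=2.0):
    """One guided denoising step (hypothetical sketch, not DeLock's exact rule).

    denoise(action, t, prompt) -> model's noise/velocity prediction for that prompt.
    The guided prediction extrapolates from the contrast prompt toward the novel
    instruction, steering the denoising dynamics away from locked-in behavior.
    """
    pred_novel = denoise(action, t, novel_prompt)       # conditioned on the new instruction
    pred_contrast = denoise(action, t, contrast_prompt) # conditioned on a contrasting prompt
    # CFG-style combination: contrast baseline plus amplified novel-instruction direction
    return pred_contrast + w * (pred_novel - pred_contrast)
```

With `w = 1` this reduces to ordinary conditioning on the novel prompt; `w > 1` amplifies the difference between the two conditional predictions, which is the standard way such guidance biases a sampler toward the desired condition.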