VLA Models Are More Generalizable Than You Think: Revisiting Physical and Spatial Modeling
arXiv cs.RO / 4/1/2026
💬 Opinion · Ideas & Deep Analysis · Models & Research
Key Points
- The paper finds that VLA models’ poor robustness to new camera viewpoints and visual perturbations stems mainly from misalignment in spatial modeling rather than from deficits in physical modeling.
- It introduces a one-shot adaptation approach that uses lightweight, learnable updates to recalibrate visual representations for better out-of-distribution viewpoint performance.
- Feature Token Modulation (FTM) applies a global affine transform to visual tokens and lifts LIBERO success under novel viewpoints from 48.5% to 87.1% using only ~4K trainable parameters (see the first sketch after this list).
- Feature Linear Adaptation (FLA) applies low-rank updates to the ViT encoder, reaching 90.8% success with 4.7M parameters, comparable to LoRA-scale finetuning at much lower cost (see the second sketch after this list).
- The results suggest pretrained VLA models may have significant untapped robustness, and that minimal targeted visual adaptation can effectively restore generalization.
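The summary describes FTM as a single global affine transform shared across all visual tokens, so only a per-channel scale and shift are trained while the rest of the model stays frozen. Below is a minimal PyTorch sketch of that idea; the class name, the token width of 2048, and the training setup are assumptions chosen so the trainable-parameter count lands near the reported ~4K, not the authors' code.

```python
import torch
import torch.nn as nn


class FeatureTokenModulation(nn.Module):
    """Hypothetical sketch of FTM: one global affine map over visual tokens.

    Only gamma and beta (2 * d_model parameters) are trainable; the VLA
    backbone and vision encoder are kept frozen during one-shot adaptation.
    """

    def __init__(self, d_model: int = 2048):  # 2048 assumed => 4096 params (~4K)
        super().__init__()
        self.gamma = nn.Parameter(torch.ones(d_model))   # per-channel scale
        self.beta = nn.Parameter(torch.zeros(d_model))   # per-channel shift

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, num_tokens, d_model); the same affine transform
        # is broadcast over every token, hence "global".
        return tokens * self.gamma + self.beta


ftm = FeatureTokenModulation(d_model=2048)
vis_tokens = torch.randn(1, 256, 2048)        # dummy visual tokens
recalibrated = ftm(vis_tokens)
print(sum(p.numel() for p in ftm.parameters()))  # 4096 trainable parameters
```

Because the transform is initialized to the identity (gamma = 1, beta = 0), fitting it on a single out-of-distribution episode can only perturb the pretrained representations as far as the data demands.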
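FLA is summarized as low-rank updates to the ViT encoder, which matches the standard LoRA-style parameterization y = Wx + (alpha/r)·BAx around a frozen linear layer. The sketch below shows one such adapter; the rank, scaling, and choice of which ViT layers to wrap are assumptions, since the summary only gives the 4.7M total.

```python
import torch
import torch.nn as nn


class LowRankAdapter(nn.Module):
    """Hypothetical sketch of an FLA-style low-rank update on a frozen linear.

    The pretrained weight W is frozen; only the low-rank factors A and B
    are trained, adding rank * (in_features + out_features) parameters.
    """

    def __init__(self, base: nn.Linear, rank: int = 16, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False           # keep pretrained weights fixed
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))  # zero init => identity residual at start
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (..., in_features); the residual path is the low-rank product BA.
        return self.base(x) + self.scale * (x @ self.A.t() @ self.B.t())


adapted = LowRankAdapter(nn.Linear(1024, 1024), rank=16)
out = adapted(torch.randn(1, 256, 1024))
print(sum(p.numel() for p in adapted.parameters() if p.requires_grad))  # 32768
```

Wrapping the attention and MLP projections of a ViT this way and training only the A/B factors is how a few million adapted parameters, as in the 4.7M figure, can approach full LoRA-scale finetuning at a fraction of the cost.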