FD-VLA: Force-Distilled Vision-Language-Action Model for Contact-Rich Manipulation

arXiv cs.RO / 3/23/2026


Key Points

  • FD-VLA introduces a Force-Distilled Vision-Language-Action framework that enables force-aware reasoning in contact-rich manipulation without relying on physical force sensors.
  • It uses a Force Distillation Module to map a learnable query token, conditioned on visual observations and robot states, into a predicted force token aligned with actual force signals.
  • During inference, the distilled force token is injected into the pretrained vision-language model to preserve vision-language semantics while enabling force-aware reasoning, allowing deployment on robots lacking expensive force-torque sensors.
  • Experiments show the distilled force token can outperform direct sensor measurements and baselines, and the FDM provides an additional force-vision-state fusion prior that improves cross-modal alignment and robustness.

Abstract

Force sensing is a crucial modality for Vision-Language-Action (VLA) frameworks, as it enables fine-grained perception and dexterous manipulation in contact-rich tasks. We present Force-Distilled VLA (FD-VLA), a novel framework that integrates force awareness into contact-rich manipulation without relying on physical force sensors. The core of our approach is a Force Distillation Module (FDM), which distills force information by mapping a learnable query token, conditioned on visual observations and robot states, into a predicted force token aligned with the latent representation of actual force signals. During inference, this distilled force token is injected into the pretrained vision-language model (VLM), enabling force-aware reasoning while preserving the integrity of its vision-language semantics. This design provides two key benefits: first, it allows practical deployment across a wide range of robots that lack expensive or fragile force-torque sensors, thereby reducing hardware cost and complexity; second, the FDM introduces an additional force-vision-state fusion prior into the VLM, which improves cross-modal alignment and enhances perception-action robustness in contact-rich scenarios. Surprisingly, our physical experiments show that the distilled force token outperforms direct sensor force measurements as well as other baselines, which highlights the effectiveness of this force-distilled VLA approach.
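To make the mechanism concrete, the sketch below illustrates the general shape of an FDM-style module as the abstract describes it: a learnable query token cross-attends over visual/state context tokens to produce a predicted force token, which is trained to match the latent encoding of the real force signal and is injected into the VLM token stream at inference. This is a minimal numpy sketch under our own assumptions; all dimensions, names, and the single-head attention layout are illustrative, not the paper's actual implementation.

```python
import numpy as np

# Hypothetical sizes -- not taken from the paper.
D = 8        # shared token dimension
N_CTX = 6    # number of visual-patch + robot-state context tokens

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class ForceDistillationSketch:
    """Toy FDM-style module: a learnable query token cross-attends over
    vision/state context to produce a predicted force token."""

    def __init__(self):
        self.query = rng.normal(size=(1, D)) * 0.1  # learnable query token
        self.Wk = rng.normal(size=(D, D)) * 0.1     # key projection
        self.Wv = rng.normal(size=(D, D)) * 0.1     # value projection

    def forward(self, context):
        # context: (N_CTX, D) fused visual + robot-state tokens
        k, v = context @ self.Wk, context @ self.Wv
        attn = softmax(self.query @ k.T / np.sqrt(D))  # (1, N_CTX)
        return (attn @ v)[0]                           # predicted force token, (D,)

def alignment_loss(pred_tok, force_latent):
    """Distillation objective: match the latent of the actual force signal
    (here plain MSE; the paper's exact loss is not specified in this summary)."""
    return float(np.mean((pred_tok - force_latent) ** 2))

fdm = ForceDistillationSketch()
context = rng.normal(size=(N_CTX, D))   # stand-in for encoded vision + state
force_latent = rng.normal(size=(D,))    # stand-in for encoded sensor force

pred = fdm.forward(context)
loss = alignment_loss(pred, force_latent)

# At inference the sensor branch is dropped entirely: `pred` is injected
# into the VLM's token sequence in place of a real force reading.
```

The key design point this illustrates is that the force "sensor" at deployment time is purely the attention readout over modalities the robot already has, which is why the method needs no force-torque hardware.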