VTouch++: A Multimodal Dataset with Vision-Based Tactile Enhancement for Bimanual Manipulation

arXiv cs.RO / 4/23/2026


Key Points

  • The VTOUCH dataset is introduced to improve learning for bimanual manipulation in contact-rich tasks by providing rich physical interaction signals from vision-based tactile sensing.
  • The dataset uses a matrix-style task design to support systematic learning and addresses prior gaps in dataset organization and coverage.
  • Automated, demand-driven data collection pipelines enable scalable coverage of real-world scenarios.
  • The paper validates VTOUCH via extensive quantitative experiments, including cross-modal retrieval and real-robot evaluations, and shows generalizable performance across multiple robots, policies, and tasks.

Abstract

Embodied intelligence has advanced rapidly in recent years; however, bimanual manipulation, especially in contact-rich tasks, remains challenging. This is largely due to the lack of datasets with rich physical interaction signals, systematic task organization, and sufficient scale. To address these limitations, we introduce the VTOUCH dataset. It leverages vision-based tactile sensing to provide high-fidelity physical interaction signals, adopts a matrix-style task design to enable systematic learning, and employs automated data collection pipelines covering real-world, demand-driven scenarios to ensure scalability. To further validate the effectiveness of the dataset, we conduct extensive quantitative experiments on cross-modal retrieval as well as real-robot evaluation. Finally, we demonstrate real-world performance through generalizable inference across multiple robots, policies, and tasks.
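
The paper's cross-modal retrieval validation is not detailed in this summary, but such evaluations typically measure how often a tactile embedding retrieves its paired visual frame (and vice versa) among the top-k nearest neighbors. Below is a minimal, hypothetical sketch of a recall@k computation over paired embeddings; the array names and dimensions are illustrative assumptions, not taken from the VTOUCH paper.

```python
import numpy as np

def recall_at_k(query_emb: np.ndarray, gallery_emb: np.ndarray, k: int = 5) -> float:
    """Fraction of queries whose index-aligned match appears in the top-k
    gallery neighbors by cosine similarity."""
    # L2-normalize so the dot product equals cosine similarity.
    q = query_emb / np.linalg.norm(query_emb, axis=1, keepdims=True)
    g = gallery_emb / np.linalg.norm(gallery_emb, axis=1, keepdims=True)
    sims = q @ g.T                                   # (num_queries, num_gallery)
    topk = np.argsort(-sims, axis=1)[:, :k]          # indices of the k most similar gallery items
    targets = np.arange(len(q))[:, None]             # query i's true match is gallery item i
    return float(np.mean(np.any(topk == targets, axis=1)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Stand-in embeddings for paired tactile / RGB frames (hypothetical encoder outputs).
    tactile = rng.normal(size=(100, 128))
    vision = tactile + 0.1 * rng.normal(size=(100, 128))   # noisy paired views
    print(f"tactile -> vision R@5: {recall_at_k(tactile, vision, k=5):.2f}")
```

Higher recall@k across both retrieval directions would indicate that the tactile and visual streams encode consistent physical interaction information, which is the property the dataset's cross-modal experiments are meant to probe.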