Visual-Tactile Peg-in-Hole Assembly Learning from Peg-out-of-Hole Disassembly
arXiv cs.RO, April 23, 2026
Key Points
- The paper presents a visual-tactile learning framework for peg-in-hole (PiH) robotic assembly that reduces exploration cost by learning from the inverse task, peg-out-of-hole (PooH) disassembly.
- It models both PooH and PiH as POMDPs in a shared visual-tactile observation space, then trains a PooH policy and converts its trajectories into expert-like data for PiH via temporal reversal and action randomization.
- During PiH execution, visual sensing is used to guide the peg-hole approach, while tactile feedback helps correct misalignment and improve contact interaction.
- Experiments across various peg-hole geometries show the method reduces contact forces by 6.4% versus single-modality baselines and achieves 87.5% success on seen objects and 77.1% on unseen objects, outperforming direct RL training by 18.1% in success rate.
- The authors provide demos, code, and datasets to support reproduction and further research at the linked project page.
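The data-conversion step described above — turning PooH disassembly trajectories into expert-like PiH data via temporal reversal and action randomization — can be sketched as follows. This is an illustrative interpretation, not the paper's implementation: it assumes trajectories are lists of (observation, action) pairs with actions given as relative displacement commands, so that reversing time and negating each displacement approximates an insertion motion, and the `noise` jitter stands in for the paper's action randomization.

```python
import random

def pooh_to_pih(traj, noise=0.01, seed=0):
    """Convert a PooH (disassembly) trajectory into expert-like PiH data.

    Hypothetical scheme: reverse the time order of the trajectory and
    negate each relative displacement, then add a small uniform jitter
    to each action component to diversify the resulting PiH dataset.
    """
    rng = random.Random(seed)
    pih_traj = []
    for obs, action in reversed(traj):
        # Negate each displacement (undo -> do) and perturb slightly.
        new_action = tuple(-a + rng.uniform(-noise, noise) for a in action)
        pih_traj.append((obs, new_action))
    return pih_traj
```

With `noise=0.0` the conversion is a pure reversal: a two-step extraction `[("o0", (0.0, 0.1)), ("o1", (0.2, -0.1))]` becomes the insertion `[("o1", (-0.2, 0.1)), ("o0", (0.0, -0.1))]`.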