A Model-based Visual Contact Localization and Force Sensing System for Compliant Robotic Grippers
arXiv cs.CV / 5/4/2026
Key Points
- The paper proposes a model-based visual contact localization and force sensing system for compliant robotic grippers that estimates grasp forces on delicate objects without damaging them.
- Instead of relying on brittle end-to-end deep learning, the system combines wrist-mounted RGB-D visual keypoints with an inverse finite element analysis (FEA) simulation that relates observed finger deformation to applied force.
- An iterative contact localization module uses a deep-learning-based online 3D reconstruction and pose estimation pipeline to update the contact location, remaining robust to visual occlusion and previously unseen objects.
- Experiments with fin-ray-shaped soft grippers show strong accuracy, achieving 0.23 N RMSE (2.11% NRMSD) during loading and 0.48 N RMSE (4.34% NRMSD) across the full grasp process under varied conditions and objects.
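The paper's exact inverse-FEA formulation is not detailed in this summary. As a rough illustration of the general idea of relating observed deformation back to force, here is a minimal sketch assuming a linearized deformation model: each candidate contact location has a precomputed displacement basis (keypoint motion per unit force, e.g. from offline finite element simulation), and the contact location and force are recovered jointly by least squares. All names, dimensions, and numbers below are hypothetical.

```python
import numpy as np

# Hypothetical sketch: linearized inverse-FEA contact force estimation.
# For each candidate contact node c on the finger, assume a precomputed
# basis vector B_c giving keypoint displacements per unit normal force.
# Given observed keypoint displacements d, fit a force magnitude for each
# candidate by least squares and keep the candidate with lowest residual.

rng = np.random.default_rng(0)
n_keypoints = 6                  # 3-D displacements of 6 visual keypoints
n_candidates = 5                 # discretized candidate contact locations

# Precomputed deformation bases (one per candidate), illustrative values.
bases = [rng.normal(size=3 * n_keypoints) for _ in range(n_candidates)]

def estimate_contact(d_obs, bases):
    """Return (best candidate index, force estimate) minimizing residual."""
    best_idx, best_force, best_res = None, 0.0, np.inf
    for i, b in enumerate(bases):
        f = float(b @ d_obs) / float(b @ b)   # 1-D least-squares force fit
        r = np.linalg.norm(d_obs - f * b)     # how well this candidate fits
        if r < best_res:
            best_idx, best_force, best_res = i, f, r
    return best_idx, best_force

# Synthetic observation: 2.0 N applied at candidate 3, plus small noise.
d_obs = 2.0 * bases[3] + 0.01 * rng.normal(size=3 * n_keypoints)
idx, force = estimate_contact(d_obs, bases)
print(idx, round(force, 2))   # recovers candidate 3 with force close to 2.0 N
```

In the actual system, the iterative contact localization module would replace this exhaustive candidate sweep, and the FEA model would be nonlinear in general; this sketch only shows the shape of the deformation-to-force inversion.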