TrackDeform3D: Markerless and Autonomous 3D Keypoint Tracking and Dataset Collection for Deformable Objects
arXiv cs.CV / 3/19/2026
📰 News · Tools & Practical Usage · Models & Research
Key Points
- TrackDeform3D presents an autonomous framework that uses RGB-D cameras to collect 3D datasets of deformable objects without manual annotation or motion capture setups.
- The method identifies 3D keypoints and robustly tracks their trajectories, incorporating motion consistency constraints for temporally smooth and geometrically coherent data.
- The approach shows consistent improvements in geometric and tracking accuracy compared to state-of-the-art methods across diverse object categories.
- The paper provides a high-quality, large-scale dataset containing 6 deformable objects and 110 minutes of trajectory data to support downstream tasks such as dynamics modeling and motion planning.
- By reducing data collection costs and reliance on labor-intensive labeling, TrackDeform3D aims to accelerate research and development in deformable object perception.
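The motion consistency constraints mentioned above can be illustrated with a common trajectory-smoothing prior: penalize large second differences (accelerations) along each tracked 3D keypoint's path. This is a minimal sketch of that general idea, not the paper's actual formulation; the function name, penalty weight, and demo data are all hypothetical.

```python
import numpy as np

def smooth_trajectory(z, lam=10.0):
    """Temporally smooth a (T, 3) keypoint trajectory by penalizing
    second differences (a simple motion-consistency prior).
    Solves the least-squares problem
        min_x ||x - z||^2 + lam * ||D2 x||^2
    via the normal equations (I + lam * D2^T D2) x = z,
    independently for each coordinate. (Illustrative sketch only.)"""
    T = z.shape[0]
    # Second-difference operator D2 of shape (T-2, T)
    D2 = np.zeros((T - 2, T))
    for t in range(T - 2):
        D2[t, t:t + 3] = [1.0, -2.0, 1.0]
    A = np.eye(T) + lam * D2.T @ D2
    return np.linalg.solve(A, z)

# Demo: noisy observations of a straight-line 3D motion (hypothetical data).
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 50)[:, None]
clean = np.hstack([t, 2.0 * t, -t])              # ground-truth trajectory
noisy = clean + 0.05 * rng.standard_normal(clean.shape)
smoothed = smooth_trajectory(noisy, lam=50.0)

err_noisy = np.mean(np.linalg.norm(noisy - clean, axis=1))
err_smooth = np.mean(np.linalg.norm(smoothed - clean, axis=1))
```

Because a linear trajectory has zero second differences, the penalty shrinks only the noise, so the smoothed track lands closer to the ground truth than the raw observations.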