SuperGrasp: Single-View Object Grasping via Superquadric Similarity Matching, Evaluation, and Refinement
arXiv cs.RO / 4/1/2026
Key Points
- The paper introduces SuperGrasp, a two-stage framework for single-view robotic grasping with parallel-jaw grippers that separates initial grasp generation from subsequent evaluation and refinement.
- It proposes a Similarity Matching Module that retrieves grasp candidates by matching an input single-view point cloud to a precomputed primitive dataset using superquadric coefficients.
- For refinement, it presents E-RNet, an end-to-end network that enlarges the grasp-aware region and uses the initial grasp closure region as a local anchor to improve stability and validity.
- The authors build a primitive dataset (1.5k primitives) and a large training dataset (100k stable grasp labels across 124 objects) to improve generalization.
- Experiments in both simulation and real-world settings show stable grasping and strong generalization to new scenes and novel objects.
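The retrieval idea behind the Similarity Matching Module can be sketched in a few lines: fit superquadric coefficients to the observed point cloud, then find the nearest precomputed primitive in coefficient space. The coefficient layout (a1, a2, a3, e1, e2), the toy database values, and the plain Euclidean distance metric below are illustrative assumptions, not the paper's actual similarity measure.

```python
import numpy as np

# Hypothetical primitive dataset: each row holds superquadric coefficients
# (a1, a2, a3, e1, e2) — three scale parameters and two shape exponents.
primitive_db = np.array([
    [0.03, 0.03, 0.10, 0.2, 1.0],   # cylinder-like
    [0.05, 0.05, 0.05, 1.0, 1.0],   # sphere-like
    [0.04, 0.02, 0.08, 0.3, 0.3],   # box-like
])

def superquadric_value(points, coeffs):
    """Standard superquadric inside-outside function: F < 1 means the
    point lies inside the primitive, F = 1 on its surface."""
    a1, a2, a3, e1, e2 = coeffs
    x, y, z = np.abs(points).T
    return ((x / a1) ** (2 / e2) + (y / a2) ** (2 / e2)) ** (e2 / e1) \
        + (z / a3) ** (2 / e1)

def match_primitive(query_coeffs, db):
    """Return the index of the closest primitive by coefficient distance
    (Euclidean here, as a stand-in for the paper's similarity metric)."""
    dists = np.linalg.norm(db - query_coeffs, axis=1)
    return int(np.argmin(dists))

# A near-spherical query retrieves the sphere-like primitive (index 1).
idx = match_primitive(np.array([0.05, 0.05, 0.048, 0.9, 1.1]), primitive_db)
```

Grasp candidates precomputed for the retrieved primitive would then serve as the initial grasps that E-RNet refines.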