ThinkGrasp: A Vision-Language System for Strategic Part Grasping in Clutter

arXiv cs.RO / 4/3/2026


Key Points

  • ThinkGrasp is a plug-and-play vision-language robotic grasping system designed to handle heavily cluttered scenes where occlusions make target objects difficult to perceive.
  • The method leverages GPT-4o’s contextual reasoning to identify targets and generate grasp poses, including cases where objects are partially obscured or nearly invisible.
  • It uses goal-oriented language instructions to progressively remove obstructing objects, uncovering the target and completing the grasp in only a few steps (see the loop sketch after this list).
  • Experiments in both simulation and real-world settings show high success rates and clear improvements over state-of-the-art approaches, especially in heavy clutter and with diverse unseen objects.
  • Results indicate that the system generalizes well to diverse objects and environments it was not tuned for.
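
To make the workflow concrete, here is a minimal Python sketch of the uncover-and-grasp loop described above. Every helper (`capture_rgbd`, `pick_object`, `plan_grasp`, `execute_grasp`, `place_aside`) is a hypothetical placeholder standing in for the paper's perception, reasoning, and control components, not ThinkGrasp's actual API.

```python
# Minimal sketch of the iterative uncover-and-grasp loop; all helpers below
# are hypothetical placeholders, not ThinkGrasp's actual interface.

def capture_rgbd(): ...              # placeholder: return (rgb, depth) arrays
def pick_object(rgb, goal): ...      # placeholder: VLM names target or blocker
def plan_grasp(obj, rgb, depth): ... # placeholder: 6-DoF grasp pose for obj
def execute_grasp(pose): ...         # placeholder: move arm, close gripper
def place_aside(obj): ...            # placeholder: set the obstruction aside

MAX_STEPS = 10  # assumed safety bound on the number of removal attempts


def grasp_in_clutter(goal: str) -> bool:
    """Repeatedly grasp and set aside obstructions until `goal` is grasped."""
    for _ in range(MAX_STEPS):
        rgb, depth = capture_rgbd()          # observe the current scene
        obj = pick_object(rgb, goal)         # choose the target or a blocker
        pose = plan_grasp(obj, rgb, depth)   # generate a grasp for that object
        execute_grasp(pose)
        if obj == goal:                      # the target itself was grasped
            return True
        place_aside(obj)                     # clear the obstruction and retry
    return False
```

Bounding the loop keeps failure cases cheap; per the abstract, the target is typically reached within a few removal steps.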

Abstract

Robotic grasping in cluttered environments remains a significant challenge due to occlusions and complex object arrangements. We developed ThinkGrasp, a plug-and-play vision-language grasping system that leverages GPT-4o's advanced contextual reasoning to plan grasping strategies in heavily cluttered environments. ThinkGrasp can identify and generate grasp poses for target objects even when they are heavily obstructed or nearly invisible, using goal-oriented language to guide the removal of obstructing objects. This approach progressively uncovers the target object and ultimately grasps it in a few steps with a high success rate. In both simulated and real-world experiments, ThinkGrasp achieved high success rates and significantly outperformed state-of-the-art methods in heavily cluttered environments and with diverse unseen objects, demonstrating strong generalization capabilities.
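
The paper does not publish its exact prompts, but the sketch below shows one plausible way to pose the goal-oriented query to GPT-4o using the OpenAI Python SDK (v1 chat-completions interface with image input). The prompt wording and the `choose_next_object` helper are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch: ask GPT-4o, given a scene image and a goal instruction,
# which object to grasp next. Prompt text is an illustrative assumption.
import base64

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def choose_next_object(image_path: str, goal: str) -> str:
    """Return the name of the object GPT-4o suggests grasping next."""
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode()

    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": (
                    f"Goal: {goal}. If the target object is visible and "
                    "graspable, name it. Otherwise, name the single "
                    "obstructing object to remove first. Reply with one "
                    "object name only."
                )},
                # Scene image passed inline as a base64 data URL
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content.strip()
```

A call such as `choose_next_object("scene.png", "pick up the red mug")` would then return either the target's name or the blocker to clear first, which is the decision the iterative loop above consumes at each step.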