Generalizable task-oriented object grasping through LLM-guided ontology and similarity-based planning

arXiv cs.RO / 3/30/2026


Key Points

  • The paper tackles task-oriented grasping (TOG), aiming to improve generalization across diverse objects and tasks, where existing vision-language-model methods struggle due to unstable part recognition and grasp inference.
  • It proposes an LLM-constructed object-part-task ontology that maps intuitive human commands to functional object-part selection without relying on semantic cues from visual recognition.
  • For part identification, it uses sampling-based geometric analysis over observed point clouds with multiple point-distribution and distance metrics to reduce viewpoint sensitivity.
  • For unknown targets, it applies similarity-based matching to imitate grasps from known reference objects that carry pre-existing segmentation and grasping knowledge, providing planning guidance without explicit prior knowledge of the new object.
  • Real-world experiments confirm accuracy in part selection, identification, and grasp generation, and the method demonstrates generalization to novel-category objects by extending the existing ontological knowledge.
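The ontology described above can be pictured as a simple lookup from an (object, task) command to the functional part to grasp. The sketch below is a minimal illustration with hypothetical entries and function names; the paper constructs this structure with an LLM rather than by hand, and its actual schema is not specified here.

```python
# Illustrative object-part-task ontology as a nested mapping.
# All entries are hypothetical examples, not the paper's actual data.
ONTOLOGY = {
    "mug":    {"pour": "handle",  "handover": "body"},
    "knife":  {"cut":  "handle",  "handover": "blade_spine"},
    "hammer": {"pound": "handle", "handover": "head"},
}

def select_functional_part(obj: str, task: str) -> str:
    """Map an intuitive (object, task) command to the part to grasp."""
    parts = ONTOLOGY.get(obj)
    if parts is None or task not in parts:
        raise KeyError(f"no ontology entry for ({obj!r}, {task!r})")
    return parts[task]

# e.g. select_functional_part("knife", "cut") returns "handle"
```

Extending generalization to a novel object category then amounts to adding a new top-level entry, which matches the paper's claim that new categories are handled by extending the existing ontological knowledge.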

Abstract

Task-oriented grasping (TOG) is more challenging than simple object grasping because it requires precise identification of object parts and careful selection of grasping areas to ensure effective and robust manipulation. While recent approaches have trained large-scale vision-language models to integrate part-level object segmentation with task-aware grasp planning, their instability in part recognition and grasp inference limits their ability to generalize across diverse objects and tasks. To address this issue, we introduce a novel, geometry-centric strategy for more generalizable TOG that does not rely on semantic features from visual recognition, effectively overcoming the viewpoint sensitivity of model-based approaches. Our main proposals include: 1) an object-part-task ontology for functional part selection based on intuitive human commands, constructed using a Large Language Model (LLM); 2) a sampling-based geometric analysis method for identifying the selected object part from observed point clouds, incorporating multiple point distribution and distance metrics; and 3) a similarity matching framework for imitative grasp planning, utilizing similar known objects with pre-existing segmentation and grasping knowledge as references to guide the planning for unknown targets. We validate the high accuracy of our approach in functional part selection, identification, and grasp generation through real-world experiments. Additionally, we demonstrate the method's generalization capabilities to novel-category objects by extending existing ontological knowledge, showcasing its adaptability to a broad range of objects and tasks.
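The similarity-matching step can be sketched as choosing, among known reference objects, the one whose observed point cloud is geometrically closest to the unknown target. The code below uses a symmetric Chamfer distance as one plausible point-distribution metric; the paper combines multiple distribution and distance metrics, so this single-metric, pure-Python version (with made-up data) is an assumption for illustration only.

```python
import math

def chamfer(a, b):
    """Symmetric Chamfer distance between two 3-D point sets
    (lists of (x, y, z) tuples). One candidate similarity metric;
    not necessarily the authors' exact formulation."""
    def one_way(src, dst):
        # Average distance from each source point to its nearest neighbor in dst.
        return sum(min(math.dist(p, q) for q in dst) for p in src) / len(src)
    return one_way(a, b) + one_way(b, a)

def nearest_reference(target, references):
    """Pick the known, pre-segmented object most similar to the unknown
    target, so its part segmentation and grasps can guide planning."""
    return min(references, key=lambda ref: chamfer(target, ref[1]))[0]

# Toy usage with hypothetical point clouds:
refs = [("mug",   [(0, 0, 0), (1, 0, 0)]),
        ("knife", [(5, 5, 5), (6, 5, 5)])]
target = [(0.1, 0, 0), (0.9, 0, 0)]
best = nearest_reference(target, refs)  # selects "mug"
```

In the full pipeline, the selected reference's segmentation and grasping knowledge would then be transferred to the target, which is what the abstract calls imitative grasp planning.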