AI Navigate

Part-Aware Open-Vocabulary 3D Affordance Grounding via Prototypical Semantic and Geometric Alignment

arXiv cs.CV / March 19, 2026

📰 News · Models & Research

Key Points

  • The paper presents a two-stage cross-modal framework for open-vocabulary 3D affordance grounding to improve semantic and geometric alignment.
  • Stage 1 uses large language models to generate part-aware instructions that recover missing semantics and link semantically similar affordances.
  • Stage 2 introduces Affordance Prototype Aggregation (APA) for cross-object geometric consistency and Intra-Object Relational Modeling (IORM) for refining within-object geometry to support precise semantic alignment.
  • Extensive experiments on a newly introduced benchmark and two existing benchmarks show consistently superior performance over prior methods.
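The paper does not publish its stage-1 prompt templates, but the idea of using an LLM to recover part-level semantics can be sketched roughly as follows. The function name and prompt wording below are hypothetical illustrations, not the authors' actual templates:

```python
def part_aware_prompt(obj_name: str, part_name: str, affordance: str) -> str:
    """Build a hypothetical LLM prompt that asks for a part-aware
    instruction, tying an affordance to a specific object part.
    The resulting instruction would replace a bare affordance label
    (e.g. 'grasp') with language grounded in the relevant part."""
    return (
        f"Write one short imperative instruction describing how the "
        f"{part_name} of a {obj_name} is used for the '{affordance}' "
        f"affordance. Mention the part explicitly."
    )

# Example: a prompt for the 'grasp' affordance of a mug's handle.
prompt = part_aware_prompt("mug", "handle", "grasp")
```

In this view, semantically similar affordances (e.g. "grasp" a mug handle and "hold" a pot handle) end up linked through the shared part-level vocabulary the LLM produces.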

Abstract

Grounding natural language questions to functionally relevant regions in 3D objects -- termed language-driven 3D affordance grounding -- is essential for embodied intelligence and human-AI interaction. Existing methods, while progressing from label-based to language-driven approaches, still face challenges in open-vocabulary generalization, fine-grained geometric alignment, and part-level semantic consistency. To address these issues, we propose a novel two-stage cross-modal framework that enhances both semantic and geometric representations for open-vocabulary 3D affordance grounding. In the first stage, large language models generate part-aware instructions to recover missing semantics, enabling the model to link semantically similar affordances. In the second stage, we introduce two key components: Affordance Prototype Aggregation (APA), which captures cross-object geometric consistency for each affordance, and Intra-Object Relational Modeling (IORM), which refines geometric differentiation within objects to support precise semantic alignment. We validate the effectiveness of our method through extensive experiments on a newly introduced benchmark, as well as two existing benchmarks, demonstrating superior performance in comparison with existing methods.
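The paper does not include code, but the core intuition behind Affordance Prototype Aggregation (APA) can be sketched in a simplified form: pool per-object features into one prototype per affordance, then score each point of a query object by similarity to that prototype. This is a minimal NumPy sketch under those assumptions; the function names are hypothetical and the paper's actual aggregation is a learned module, not a plain mean:

```python
import numpy as np

def aggregate_prototypes(obj_feats, affordance_labels, num_affordances):
    """Form one prototype per affordance by averaging pooled object
    features that share that affordance label (a rough stand-in for
    the paper's cross-object Affordance Prototype Aggregation)."""
    dim = obj_feats.shape[-1]
    protos = np.zeros((num_affordances, dim))
    for a in range(num_affordances):
        mask = affordance_labels == a
        if mask.any():
            protos[a] = obj_feats[mask].mean(axis=0)
    return protos

def ground_points(point_feats, prototype, eps=1e-8):
    """Score each point of an object by cosine similarity to an
    affordance prototype; high-scoring points are grounded regions."""
    p = prototype / (np.linalg.norm(prototype) + eps)
    f = point_feats / (np.linalg.norm(point_feats, axis=-1, keepdims=True) + eps)
    return f @ p  # shape (N,), values in [-1, 1]

# Toy usage: 100 objects with 16-d pooled features and 4 affordance classes,
# then per-point scores for a 2048-point query object.
rng = np.random.default_rng(0)
protos = aggregate_prototypes(
    rng.normal(size=(100, 16)), rng.integers(0, 4, size=100), 4
)
scores = ground_points(rng.normal(size=(2048, 16)), protos[1])
```

The within-object refinement step (IORM) would then sharpen these scores by modeling relations among points of the same object, which this sketch omits.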