RAAP: Retrieval-Augmented Affordance Prediction with Cross-Image Action Alignment

arXiv cs.RO / 4/1/2026


Key Points

  • The paper introduces RAAP, a retrieval-augmented framework for predicting object affordances to support fine-grained robot interactions in unstructured environments.
  • RAAP improves robustness by decoupling static contact localization from dynamic action direction, transferring contact points using dense correspondence and predicting action directions via a retrieval-augmented alignment model.
  • The model uses dual-weighted attention to consolidate multiple retrieved references, aiming to reduce failures from sparse or incomplete retrieval coverage.
  • RAAP is trained on compact subsets of DROID and HOI4D, with as few as tens of samples per task, and generalizes consistently to unseen objects and categories.
  • The authors report zero-shot robotic manipulation results in both simulation and real-world settings, and provide a project website for reference and reproducibility.
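The contact-transfer step described above can be sketched as a nearest-neighbor match over dense feature maps. This is an illustrative reconstruction, not the paper's exact method: the function name, the `(H, W, C)` feature-map shapes, and the cosine-similarity matching rule are all assumptions.

```python
import numpy as np

def transfer_contact_point(ref_feats, tgt_feats, contact_uv):
    """Hypothetical sketch of dense-correspondence contact transfer.

    ref_feats, tgt_feats: (H, W, C) dense feature maps of the retrieved
        reference image and the target image (e.g. from a vision backbone).
    contact_uv: (row, col) of the annotated contact point in the reference.
    Returns the (row, col) in the target whose feature best matches.
    """
    u, v = contact_uv
    q = ref_feats[u, v]
    q = q / np.linalg.norm(q)                      # unit query feature
    H, W, C = tgt_feats.shape
    T = tgt_feats.reshape(-1, C)
    T = T / np.linalg.norm(T, axis=1, keepdims=True)
    sim = T @ q                                    # cosine sim with every target pixel
    return divmod(int(np.argmax(sim)), W)          # best-matching (row, col)
```

In practice the matched point would then seed the static contact localization, while the action direction is predicted separately by the retrieval-augmented alignment model.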

Abstract

Understanding object affordances is essential for enabling robots to perform purposeful and fine-grained interactions in diverse and unstructured environments. However, existing approaches either rely on retrieval, which is fragile due to sparsity and coverage gaps, or on large-scale models, which frequently mislocalize contact points and mispredict post-contact actions when applied to unseen categories, thereby hindering robust generalization. We introduce Retrieval-Augmented Affordance Prediction (RAAP), a framework that unifies affordance retrieval with alignment-based learning. By decoupling static contact localization and dynamic action direction, RAAP transfers contact points via dense correspondence and predicts action directions through a retrieval-augmented alignment model that consolidates multiple references with dual-weighted attention. Trained on compact subsets of DROID and HOI4D with as few as tens of samples per task, RAAP achieves consistent performance across unseen objects and categories, and enables zero-shot robotic manipulation in both simulation and the real world. Project website: https://github.com/SEU-VIPGroup/RAAP.
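The "dual-weighted" consolidation of multiple retrieved references might look like the following minimal sketch: one weight from feature similarity to the query scene, one from the retrieval score, combined multiplicatively before averaging the references' action directions. The function name, the softmax temperature, and the specific combination rule are assumptions for illustration; the paper's actual attention module is learned.

```python
import numpy as np

def dual_weighted_consolidation(query_feat, ref_feats, ref_dirs, ref_scores, temp=0.1):
    """Hypothetical sketch: fuse action directions from K retrieved references.

    query_feat: (C,) feature of the target scene.
    ref_feats:  (K, C) features of the retrieved references.
    ref_dirs:   (K, 3) action directions annotated on the references.
    ref_scores: (K,) retrieval scores (e.g. from the retriever's ranking).
    """
    q = query_feat / np.linalg.norm(query_feat)
    R = ref_feats / np.linalg.norm(ref_feats, axis=1, keepdims=True)
    w_sim = np.exp(R @ q / temp)                   # similarity weight (softmax)
    w_sim = w_sim / w_sim.sum()
    w_ret = np.asarray(ref_scores, dtype=float)
    w_ret = w_ret / w_ret.sum()                    # retrieval-score weight
    w = w_sim * w_ret                              # dual weighting: elementwise product
    w = w / w.sum()
    d = (w[:, None] * ref_dirs).sum(axis=0)        # weighted average of directions
    return d / np.linalg.norm(d)                   # unit action direction
```

Weighting by both signals is what lets a single off-topic retrieval (high score, low visual similarity, or vice versa) be down-weighted rather than corrupting the fused direction, which is the robustness-to-sparse-retrieval behavior the abstract claims.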