RAAP: Retrieval-Augmented Affordance Prediction with Cross-Image Action Alignment
arXiv cs.RO / 4/1/2026
Key Points
- The paper introduces RAAP, a retrieval-augmented framework for predicting object affordances to support fine-grained robot interactions in unstructured environments.
- RAAP improves robustness by decoupling static contact localization from dynamic action direction, transferring contact points using dense correspondence and predicting action directions via a retrieval-augmented alignment model.
- The model uses dual-weighted attention to consolidate multiple retrieved references, aiming to reduce failures from sparse or incomplete retrieval coverage.
- RAAP is trained on compact subsets of DROID and HOI4D, with as few as tens of samples per task, yet generalizes consistently to unseen objects and categories.
- The authors report zero-shot robotic manipulation results in both simulation and real-world settings, and provide a project website for reference and reproducibility.