AFFORD2ACT: Affordance-Guided Automatic Keypoint Selection for Generalizable and Lightweight Robotic Manipulation

arXiv cs.RO / 4/17/2026


Key Points

  • The paper proposes AFFORD2ACT, a vision-based robotic manipulation framework that selects a minimal, manipulation-relevant set of semantic 2D keypoints using an affordance-guided approach.
  • It reduces computational burden by avoiding dense image/point-cloud inputs and instead distills keypoints from a text prompt and a single image, mitigating the influence of irrelevant background features.
  • AFFORD2ACT uses a three-stage pipeline (affordance filtering, category-level keypoint construction, and transformer-based policy learning with embedded gating) to focus reasoning on the most relevant keypoints; a code sketch of the gated policy follows this list.
  • The resulting policy is lightweight, operating on a compact 38-dimensional state, and can be trained in about 15 minutes without relying on proprioception or dense representations.
  • Across diverse real-world manipulation tasks, AFFORD2ACT consistently improves data efficiency, achieving an 82% success rate on unseen objects, novel categories, new backgrounds, and distractors.

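As a concrete illustration of the gated transformer policy, here is a minimal PyTorch sketch. It is a sketch under stated assumptions, not the paper's implementation: the decomposition of the 38-dimensional state into 19 two-dimensional keypoints, the layer sizes, and the 7-DoF action head are all illustrative guesses.

```python
# Minimal sketch of a gated keypoint policy. The paper reports a compact
# 38-dimensional state and a transformer with embedded gating; the exact
# keypoint count, layer widths, and action space below are assumptions.
import torch
import torch.nn as nn

class GatedKeypointPolicy(nn.Module):
    def __init__(self, num_keypoints=19, d_model=64, action_dim=7):
        super().__init__()
        # Each 2D keypoint (x, y) becomes one token; 19 keypoints would give
        # the 38-dimensional state (an assumed decomposition).
        self.embed = nn.Linear(2, d_model)
        # Embedded gating: a learned per-keypoint relevance weight in [0, 1].
        self.gate = nn.Sequential(nn.Linear(d_model, 1), nn.Sigmoid())
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)
        self.head = nn.Linear(d_model, action_dim)

    def forward(self, keypoints):
        # keypoints: (batch, num_keypoints, 2) normalized image coordinates
        tokens = self.embed(keypoints)
        tokens = tokens * self.gate(tokens)   # down-weight irrelevant keypoints
        feats = self.encoder(tokens)
        return self.head(feats.mean(dim=1))   # pooled features -> action

policy = GatedKeypointPolicy()
action = policy(torch.rand(1, 19, 2))         # e.g. a 7-DoF action vector
```

The per-keypoint sigmoid gate plays the role of the embedded gating described above: tokens for irrelevant keypoints are scaled toward zero before the transformer attends over them.
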
Abstract

Vision-based robot learning often relies on dense image or point-cloud inputs, which are computationally heavy and entangle irrelevant background features. Existing keypoint-based approaches can focus on manipulation-centric features and remain lightweight, but they depend on manual heuristics or task-coupled selection, limiting scalability and semantic understanding. To address this, we propose AFFORD2ACT, an affordance-guided framework that distills a minimal set of semantic 2D keypoints from a text prompt and a single image. AFFORD2ACT follows a three-stage pipeline: affordance filtering, category-level keypoint construction, and transformer-based policy learning with embedded gating to reason about the most relevant keypoints. This yields a compact 38-dimensional state policy that can be trained in 15 minutes and performs well in real time without proprioception or dense representations. Across diverse real-world manipulation tasks, AFFORD2ACT consistently improves data efficiency, achieving an 82% success rate on unseen objects, novel categories, backgrounds, and distractors.
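
For the first stage, a hedged sketch of affordance filtering as prompt-conditioned ranking follows. The `affordance_score` interface and the `filter_keypoints` helper are hypothetical stand-ins; the summary above does not specify how candidate keypoints are actually scored against the text prompt.

```python
# Hedged sketch of affordance filtering: rank candidate keypoints by how well
# they match the task prompt and keep only the top few. The scorer is a
# hypothetical stand-in for whatever vision-language model does the scoring.
from typing import Callable, List, Tuple

Keypoint = Tuple[float, float]  # normalized (x, y) image coordinates

def filter_keypoints(
    candidates: List[Keypoint],
    prompt: str,
    score: Callable[[Keypoint, str], float],
    top_k: int = 19,
) -> List[Keypoint]:
    """Keep the top_k candidates most relevant to the text prompt."""
    ranked = sorted(candidates, key=lambda kp: score(kp, prompt), reverse=True)
    return ranked[:top_k]

# Toy usage with a dummy scorer; a real system would rate the image region
# around each candidate keypoint against the prompt.
dummy_score = lambda kp, prompt: -abs(kp[0] - 0.5)   # prefer central points
selected = filter_keypoints([(0.1, 0.2), (0.5, 0.5), (0.9, 0.8)],
                            "open the drawer", dummy_score, top_k=2)
```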