AFFORD2ACT: Affordance-Guided Automatic Keypoint Selection for Generalizable and Lightweight Robotic Manipulation



Abstract

We present AFFORD2ACT, an affordance-guided framework that distills a minimal set of semantic 2D keypoints from a text prompt and a single image. AFFORD2ACT follows a three-stage pipeline: affordance filtering, category-level keypoint construction, and transformer-based policy learning with an embedded gating mechanism that reasons about the most relevant keypoints. This yields a compact 38-dimensional state policy that can be trained in 15 minutes and runs in real time without proprioception or dense visual representations. AFFORD2ACT consistently improves data efficiency, achieving an 82% success rate on unseen objects, novel categories, backgrounds, and distractors.
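To make the pipeline's final stage concrete, the following is a minimal sketch of a gated transformer-style keypoint policy: keypoint tokens pass through one self-attention layer, a per-token sigmoid gate down-weights irrelevant keypoints, and a gated pooling feeds an action head. All dimensions, layer sizes, and the 7-DoF action output are illustrative assumptions, not the paper's exact architecture; the weights are random stand-ins for learned parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# 19 keypoints x 2D coordinates ~ a 38-dimensional state (assumed split).
K, D_IN, D_MODEL, D_ACT = 19, 2, 32, 7

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Random weights stand in for learned parameters.
W_embed = rng.standard_normal((D_IN, D_MODEL)) * 0.1
W_q = rng.standard_normal((D_MODEL, D_MODEL)) * 0.1
W_k = rng.standard_normal((D_MODEL, D_MODEL)) * 0.1
W_v = rng.standard_normal((D_MODEL, D_MODEL)) * 0.1
w_gate = rng.standard_normal((D_MODEL, 1)) * 0.1
W_head = rng.standard_normal((D_MODEL, D_ACT)) * 0.1

def policy(keypoints_2d):
    """Map (K, 2) normalized keypoint coordinates to a (D_ACT,) action."""
    tokens = keypoints_2d @ W_embed                 # (K, D_MODEL) embeddings
    q, k, v = tokens @ W_q, tokens @ W_k, tokens @ W_v
    attn = softmax(q @ k.T / np.sqrt(D_MODEL))      # (K, K) attention weights
    mixed = attn @ v                                # (K, D_MODEL) mixed tokens
    gate = sigmoid(mixed @ w_gate)                  # (K, 1) per-keypoint gate
    pooled = (gate * mixed).sum(axis=0) / (gate.sum() + 1e-8)
    return pooled @ W_head                          # (D_ACT,) action

action = policy(rng.uniform(0.0, 1.0, size=(K, D_IN)))
print(action.shape)  # (7,)
```

The gate values here are what the attention visualizations below track over time: as the gripper approaches an object, the learned gates concentrate weight on the task-relevant keypoints.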

Robust to Environmental Distractors

Motion distractors

Lighting distractors

How Attention Makes the Policy Robust: Dynamic Attention Visualization on Keypoints

Attention on Keypoints Grows as the Gripper Aligns

Generalization Across Unseen Categories

Generalization to Unseen Objects (reference object shown in the bottom left)