TAIHRI: Task-Aware 3D Human Keypoints Localization for Close-Range Human-Robot Interaction

arXiv cs.CV / 4/13/2026


Key Points

  • The paper introduces TAIHRI, a Vision-Language Model designed specifically for close-range human-robot interaction to improve 3D human keypoint localization for task-critical body parts.
  • It addresses limitations of prior whole-body/root-focused 3D keypoint methods by targeting metric-scale accuracy in an egocentric camera 3D coordinate system.
  • TAIHRI quantizes 3D keypoints into a finite interaction space and performs 2D keypoint reasoning through next-token prediction to recover precise 3D coordinates.
  • The approach can adapt to downstream HRI tasks, including natural-language control and global space human mesh recovery.
  • Experiments on egocentric interaction benchmarks report superior accuracy for task-relevant keypoints, and the authors provide code via the referenced GitHub repository.

Abstract

Accurate 3D human keypoint localization is a critical technology for enabling robots to achieve natural and safe physical interaction with users. Conventional 3D human keypoint estimation methods primarily focus on whole-body reconstruction quality relative to the root joint. However, in practical human-robot interaction (HRI) scenarios, robots are more concerned with the precise metric-scale spatial localization of task-relevant body parts in the egocentric camera's 3D coordinate system. We propose TAIHRI, the first Vision-Language Model (VLM) tailored for close-range HRI perception, capable of understanding users' motion commands and directing the robot's attention to the most task-relevant keypoints. By quantizing 3D keypoints into a finite interaction space, TAIHRI precisely localizes the 3D spatial coordinates of critical body parts through 2D keypoint reasoning via next-token prediction, and seamlessly adapts to downstream tasks such as natural-language control and global-space human mesh recovery. Experiments on egocentric interaction benchmarks demonstrate that TAIHRI achieves superior estimation accuracy for task-critical body parts. We believe TAIHRI opens new research avenues in embodied human-robot interaction. Code is available at: https://github.com/Tencent/TAIHRI.
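To make the "finite interaction space" idea concrete: quantizing metric 3D coordinates into a bounded set of discrete indices is what lets a language model emit keypoint positions as tokens via next-token prediction. The sketch below shows one plausible uniform-quantization scheme; the workspace bounds, per-axis bin count, and function names are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

# Assumed close-range workspace in the egocentric camera frame (metres).
# These bounds and the 256-bin vocabulary are hypothetical choices.
WORKSPACE_MIN = np.array([-1.0, -1.0, 0.0])
WORKSPACE_MAX = np.array([1.0, 1.0, 2.0])
NUM_BINS = 256  # discrete tokens per axis

def quantize(xyz: np.ndarray) -> np.ndarray:
    """Map metric 3D coordinates to per-axis integer token indices."""
    norm = (xyz - WORKSPACE_MIN) / (WORKSPACE_MAX - WORKSPACE_MIN)
    return np.clip(np.round(norm * (NUM_BINS - 1)), 0, NUM_BINS - 1).astype(int)

def dequantize(tokens: np.ndarray) -> np.ndarray:
    """Recover approximate metric coordinates from token indices."""
    return WORKSPACE_MIN + tokens / (NUM_BINS - 1) * (WORKSPACE_MAX - WORKSPACE_MIN)

# Example: a task-relevant wrist keypoint predicted as three axis tokens.
wrist = np.array([0.30, -0.12, 0.85])
toks = quantize(wrist)
recovered = dequantize(toks)
print(toks, np.abs(recovered - wrist).max())  # error bounded by half a bin width
```

With 256 bins over a 2 m range, the quantization error per axis is at most about 4 mm, which suggests why a modest token vocabulary can still deliver metric-scale precision for task-critical body parts.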