HandVQA: Diagnosing and Improving Fine-Grained Spatial Reasoning about Hands in Vision-Language Models

arXiv cs.CV / 3/30/2026


Key Points

  • HandVQA is introduced as a large-scale diagnostic benchmark to measure how well vision-language models perform fine-grained spatial reasoning about articulated hand poses.
  • The benchmark is built from high-quality 3D hand datasets and contains 1.6M+ multiple-choice visual question answering items targeting joint-level spatial attributes such as angles, distances, and relative positions (see the sketch after this list).
  • Evaluations of several state-of-the-art VLMs (LLaVA, DeepSeek, and Qwen-VL) reveal systematic failure modes such as hallucinated finger parts, incorrect geometric interpretations, and weak generalization.
  • The authors report that 3D-grounded spatial knowledge learned via HandVQA transfers in a zero-shot manner, improving downstream tasks including hand gesture recognition (+10.33%) and hand-object interaction (+2.63%).

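As a concrete illustration of the question format, the sketch below derives one multiple-choice item from ground-truth 3D keypoints, the way a benchmark like this could be generated. The 21-keypoint indexing, joint names, question wording, and distractor offsets are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np

# Hypothetical 21-keypoint hand layout (a FreiHAND-style ordering is assumed):
# index 0 is the wrist; 5, 6, 7, 8 run along the index finger to its tip.
WRIST, INDEX_MCP, INDEX_PIP, INDEX_TIP = 0, 5, 6, 8

def joint_angle(joints: np.ndarray, a: int, b: int, c: int) -> float:
    """Interior angle at joint b (in degrees) between segments b->a and b->c."""
    u = joints[a] - joints[b]
    v = joints[c] - joints[b]
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

def make_angle_question(joints: np.ndarray, rng: np.random.Generator) -> dict:
    """Turn ground-truth 3D joints into one multiple-choice VQA item."""
    answer = joint_angle(joints, INDEX_MCP, INDEX_PIP, INDEX_TIP)
    # Distractors are offset from the true angle; the 30-degree spacing is an
    # illustrative choice, not the benchmark's documented scheme.
    options = [answer, answer + 30.0, answer - 30.0, answer + 60.0]
    rng.shuffle(options)
    return {
        "question": "What is the bending angle at the index-finger PIP joint?",
        "options": [f"{o:.0f} degrees" for o in options],
        "answer": f"{answer:.0f} degrees",
    }

rng = np.random.default_rng(0)
joints = rng.standard_normal((21, 3))  # stand-in for real dataset keypoints
print(make_angle_question(joints, rng))
```

Because the answer comes from 3D ground truth rather than human annotation, this kind of generation scales to millions of items with controlled difficulty.
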
Abstract

Understanding the fine-grained articulation of human hands is critical in high-stakes settings such as robot-assisted surgery, chip manufacturing, and AR/VR-based human-AI interaction. Despite achieving near-human performance on general vision-language benchmarks, current vision-language models (VLMs) struggle with fine-grained spatial reasoning, especially when interpreting complex, articulated hand poses. We introduce HandVQA, a large-scale diagnostic benchmark designed to evaluate VLMs' understanding of detailed hand anatomy through visual question answering. Built upon high-quality 3D hand datasets (FreiHAND, InterHand2.6M, FPHA), our benchmark includes over 1.6M controlled multiple-choice questions that probe spatial relationships between hand joints, such as angles, distances, and relative positions. We evaluate several state-of-the-art VLMs (LLaVA, DeepSeek, and Qwen-VL) in both base and fine-tuned settings, using lightweight LoRA fine-tuning. Our findings reveal systematic limitations in current models, including hallucinated finger parts, incorrect geometric interpretations, and poor generalization. HandVQA not only exposes these critical reasoning gaps but also provides a validated path to improvement. We demonstrate that the 3D-grounded spatial knowledge learned from our benchmark transfers in a zero-shot setting, significantly improving model accuracy on novel downstream tasks such as hand gesture recognition (+10.33%) and hand-object interaction (+2.63%).
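
The abstract's "lightweight fine-tuning via LoRA" refers to training small low-rank adapter matrices instead of the full model weights. The snippet below is a minimal sketch of how such a setup typically looks with Hugging Face PEFT; the checkpoint name, rank, and target modules are assumptions for illustration, not the paper's reported configuration.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForVision2Seq

# Illustrative base model; the paper also evaluates DeepSeek and Qwen-VL.
model = AutoModelForVision2Seq.from_pretrained("llava-hf/llava-1.5-7b-hf")

lora_config = LoraConfig(
    r=16,                                 # low-rank update dimension (assumed)
    lora_alpha=32,                        # adapter scaling factor (assumed)
    target_modules=["q_proj", "v_proj"],  # attention projections only
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

# Wrap the VLM so that only the adapter matrices receive gradients.
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all weights
```

Because only the adapter weights train, a 1.6M-item benchmark can be used for fine-tuning without the cost of updating a full multi-billion-parameter model.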