Encoder-Free Human Motion Understanding via Structured Motion Descriptions

arXiv cs.CV / 4/24/2026


Key Points

  • The paper introduces Structured Motion Description (SMD), a rule-based method that converts human joint position sequences into structured natural-language descriptions of joint angles, body-part movements, and global trajectory.
  • By representing motion as text, SMD lets LLMs use their existing pretrained body-part and movement semantics without training learned motion encoders or cross-modal alignment modules.
  • The authors report new state-of-the-art performance on motion question answering (66.7% on BABEL-QA and 90.1% on HuMMan-QA) and motion captioning (HumanML3D: R@1 = 0.584 and CIDEr = 53.16).
  • SMD is portable across different LLMs because the same text input can be reused with only lightweight LoRA adaptation, tested across 8 LLMs from 6 model families.
  • The text-based motion representation is human-readable and supports interpretable attention analysis over motion descriptions, with code/data and pretrained LoRA adapters released publicly.
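The core idea behind the rule-based conversion can be pictured as: compute joint angles from 3D joint positions, bucket them into natural-language phrases, and concatenate those phrases into a description an LLM can read directly. The sketch below is illustrative only; the joint names, angle thresholds, and phrasing are assumptions for demonstration, not the paper's actual SMD rules.

```python
import math

def joint_angle(a, b, c):
    """Angle at joint b (in degrees) formed by 3D points a-b-c."""
    v1 = [a[i] - b[i] for i in range(3)]
    v2 = [c[i] - b[i] for i in range(3)]
    dot = sum(x * y for x, y in zip(v1, v2))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    # Clamp for numerical safety before acos.
    cos = max(-1.0, min(1.0, dot / (n1 * n2)))
    return math.degrees(math.acos(cos))

def describe_elbow(shoulder, elbow, wrist):
    """Map an elbow angle to a textual phrase (thresholds are illustrative)."""
    ang = joint_angle(shoulder, elbow, wrist)
    if ang < 60:
        state = "strongly bent"
    elif ang < 120:
        state = "bent"
    else:
        state = "nearly straight"
    return f"the right elbow is {state} ({ang:.0f} degrees)"

# A fully extended arm reads as "nearly straight":
print(describe_elbow((0, 0, 0), (1, 0, 0), (2, 0, 0)))
# A right-angle arm reads as "bent":
print(describe_elbow((0, 0, 0), (1, 0, 0), (1, 1, 0)))
```

Because the output is deterministic text rather than learned embeddings, the same description string can be fed to any instruction-tuned LLM, which is what makes the lightweight per-model LoRA adaptation possible.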

Abstract

The world knowledge and reasoning capabilities of text-based large language models (LLMs) are advancing rapidly, yet current approaches to human motion understanding, including motion question answering and captioning, have not fully exploited these capabilities. Existing LLM-based methods typically learn motion-language alignment through dedicated encoders that project motion features into the LLM's embedding space, remaining constrained by cross-modal representation and alignment. Inspired by biomechanical analysis, where joint angles and body-part kinematics have long served as a precise descriptive language for human movement, we propose **Structured Motion Description (SMD)**, a rule-based, deterministic approach that converts joint position sequences into structured natural-language descriptions of joint angles, body-part movements, and global trajectory. By representing motion as text, SMD enables LLMs to apply their pretrained knowledge of body parts, spatial directions, and movement semantics directly to motion reasoning, without requiring learned encoders or alignment modules. We show that this approach achieves new state-of-the-art results on both motion question answering (66.7% on BABEL-QA, 90.1% on HuMMan-QA) and motion captioning (R@1 of 0.584, CIDEr of 53.16 on HumanML3D), surpassing all prior methods. SMD additionally offers practical benefits: the same text input works across different LLMs with only lightweight LoRA adaptation (validated on 8 LLMs from 6 model families), and its human-readable representation enables interpretable attention analysis over motion descriptions. Code, data, and pretrained LoRA adapters are available at https://yaozhang182.github.io/motion-smd/.