AI Navigate

Learning to Assist: Physics-Grounded Human-Human Control via Multi-Agent Reinforcement Learning

arXiv cs.CV / 3/13/2026


Key Points

  • The paper formulates the imitation of assistive, force‑exchanging human–human motions as a multi‑agent reinforcement learning problem, jointly training a supporter and a recipient in a physics simulator to track assistive motion references.
  • It introduces a partner-policy initialization scheme that transfers priors from single‑human motion‑tracking controllers, greatly improving exploration during learning.
  • It proposes dynamic reference retargeting and a contact‑promoting reward to adapt the assistant's reference motion in real time to the recipient's pose and encourage physically meaningful support.
  • The authors show that AssistMimic is the first method capable of successfully tracking assistive interaction motions on established benchmarks, demonstrating the value of a multi‑agent RL approach for physically grounded and socially aware humanoid control.
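The retargeting and contact ideas in the points above can be illustrated with a small sketch. This is not the paper's implementation; the function names, the exponential reward shapes, and the blending coefficient `alpha` are all illustrative assumptions.

```python
import math

def tracking_reward(pose, ref_pose, sigma=0.5):
    # Standard exponential motion-tracking reward (assumed form):
    # reward approaches 1 as the pose matches the reference.
    err = sum((p - r) ** 2 for p, r in zip(pose, ref_pose))
    return math.exp(-err / (2 * sigma ** 2))

def contact_promoting_reward(hand_pos, support_point, threshold=0.1):
    # Hypothetical contact-promoting term: full reward when the
    # supporter's hand is within `threshold` of the recipient's
    # support point, decaying exponentially beyond it.
    dist = math.dist(hand_pos, support_point)
    if dist < threshold:
        return 1.0
    return math.exp(-(dist - threshold))

def retarget_reference(ref_pose, recipient_offset, alpha=0.5):
    # Dynamic reference retargeting (sketch): shift the supporter's
    # reference toward the recipient's real-time deviation from its
    # own reference, blended by a hypothetical coefficient `alpha`.
    return [r + alpha * d for r, d in zip(ref_pose, recipient_offset)]
```

For example, `retarget_reference([1.0, 2.0], [0.2, -0.2])` nudges the supporter's reference by half the recipient's deviation, so the assistant tracks where the partner actually is rather than a fixed clip.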

Abstract

Humanoid robotics has strong potential to transform daily service and caregiving applications. Although recent advances in general motion tracking (GMT) within physics engines have enabled virtual characters and humanoid robots to reproduce a broad range of human motions, these behaviors are largely limited to contactless social interactions or isolated movements. Assistive scenarios, by contrast, require continuous awareness of a human partner and rapid adaptation to their evolving posture and dynamics. In this paper, we formulate the imitation of closely interacting, force-exchanging human-human motion sequences as a multi-agent reinforcement learning problem. We jointly train partner-aware policies for both the supporter (assistant) agent and the recipient agent in a physics simulator to track assistive motion references. To make this problem tractable, we introduce a partner-policy initialization scheme that transfers priors from single-human motion-tracking controllers, greatly improving exploration. We further propose dynamic reference retargeting and a contact-promoting reward, which adapt the assistant's reference motion to the recipient's real-time pose and encourage physically meaningful support. We show that AssistMimic is the first method capable of successfully tracking assistive interaction motions on established benchmarks, demonstrating the benefits of a multi-agent RL formulation for physically grounded and socially aware humanoid control.
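The partner-policy initialization described in the abstract can be sketched as follows. The `Policy` class and its linear `act` method are toy stand-ins, not the paper's controllers; the point is only that both agents start from a shared single-human motion-tracking prior instead of random weights.

```python
import copy

class Policy:
    """Toy stand-in for a pretrained single-human tracking controller."""

    def __init__(self, params):
        self.params = params

    def act(self, obs):
        # Trivial linear policy: action = elementwise params * observation.
        return [p * o for p, o in zip(self.params, obs)]

def init_partner_policies(single_human_policy):
    # Partner-policy initialization (sketch): clone the single-human
    # prior into two independent policies, one for the supporter and
    # one for the recipient, which are then fine-tuned jointly with
    # multi-agent RL against assistive motion references.
    supporter = copy.deepcopy(single_human_policy)
    recipient = copy.deepcopy(single_human_policy)
    return supporter, recipient
```

Starting both agents from the same motion-tracking prior means early rollouts already produce plausible human movement, which is what makes exploration in the coupled two-agent setting tractable.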