Neural Assistive Impulses: Synthesizing Exaggerated Motions for Physics-based Characters

arXiv cs.AI / 4/8/2026


Key Points

  • The paper addresses a core challenge in physics-based character animation: data-driven DRL methods can learn complex skills but fail to reproduce exaggerated, stylized motions that would normally violate physics constraints (e.g., instantaneous dashes or mid-air trajectory changes).
  • It identifies the root cause as treating the character as an underactuated floating-base system where internal torques and momentum conservation dominate, making direct enforcement of infeasible motions via external wrenches unstable.
  • The proposed method, Assistive Impulse Neural Control, shifts the assistance formulation from force space to impulse space to mitigate training instability caused by velocity discontinuities and force spikes.
  • The framework splits the assistive signal into an analytic high-frequency component from inverse dynamics and a learned low-frequency residual, using a hybrid neural policy to improve numerical stability and control.
  • Experiments reportedly show robust tracking of highly agile maneuvers that were previously intractable for conventional physics-based approaches, expanding the range of animations that can be synthesized reliably.
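The core numerical argument in the second and third bullets can be made concrete with a toy calculation. The sketch below (illustrative only; the function names and numbers are not from the paper) shows why the same assistance is well-behaved in impulse space but produces timestep-dependent spikes in force space: the impulse needed to realize a velocity discontinuity is fixed, while the equivalent force grows without bound as the simulation step shrinks.

```python
import numpy as np

def required_impulse(mass, v_current, v_target):
    """Impulse that realizes a velocity discontinuity in one step.

    In impulse space the assistance magnitude depends only on the
    desired velocity change, not on the simulation timestep.
    """
    return mass * (v_target - v_current)

def equivalent_force(impulse, dt):
    """The force-space view of the same assistance: a spike whose
    magnitude scales as 1/dt, which is what destabilizes training."""
    return impulse / dt

# Illustrative numbers: a 70 kg character performing an
# "instantaneous dash" of +5 m/s along x.
mass = 70.0
dv = np.array([5.0, 0.0, 0.0])
J = required_impulse(mass, np.zeros(3), dv)   # 350 N*s, timestep-free
F_60 = equivalent_force(J, 1 / 60)            # ~21,000 N at 60 Hz
F_240 = equivalent_force(J, 1 / 240)          # ~84,000 N at 240 Hz
```

At 240 Hz the force spike is four times larger than at 60 Hz for the same motion, while the impulse is identical; this is one way to read the paper's claim that reformulating assistance in impulse space mitigates instability.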

Abstract

Physics-based character animation has become a fundamental approach for synthesizing realistic, physically plausible motions. While current data-driven deep reinforcement learning (DRL) methods can synthesize complex skills, they struggle to reproduce exaggerated, stylized motions, such as instantaneous dashes or mid-air trajectory changes, which are required in animation but violate standard physical laws. The primary limitation stems from modeling the character as an underactuated floating-base system, in which internal joint torques and momentum conservation strictly govern motion. Direct attempts to enforce such motions via external wrenches often lead to training instability, as velocity discontinuities produce sparse, high-magnitude force spikes that prevent policy convergence. We propose Assistive Impulse Neural Control, a framework that reformulates external assistance in impulse space rather than force space to ensure numerical stability. We decompose the assistive signal into an analytic high-frequency component derived from Inverse Dynamics and a learned low-frequency residual correction, governed by a hybrid neural policy. We demonstrate that our method enables robust tracking of highly agile, dynamically infeasible maneuvers that were previously intractable for physics-based methods.
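The decomposition described in the abstract can be sketched as follows. This is a minimal toy, assuming a point-mass character and using an exponential moving average as one plausible way to keep the learned residual low-frequency; the class and method names are illustrative, not the authors' API.

```python
import numpy as np

class HybridAssist:
    """Toy sketch of a hybrid assistive signal: an analytic
    high-frequency impulse from (here, trivial) inverse dynamics
    plus a smoothed low-frequency residual supplied by a policy."""

    def __init__(self, mass, smoothing=0.95):
        self.mass = mass
        self.smoothing = smoothing
        self.residual = np.zeros(3)  # low-frequency state

    def analytic_impulse(self, v_current, v_target):
        # High-frequency term: the exact impulse that closes the
        # tracking velocity gap this step (point-mass stand-in for
        # a full inverse-dynamics solve).
        return self.mass * (v_target - v_current)

    def step(self, v_current, v_target, policy_output):
        # Low-frequency term: EMA-smoothed policy residual, so the
        # learned correction varies slowly relative to the timestep.
        self.residual = (self.smoothing * self.residual
                         + (1.0 - self.smoothing) * policy_output)
        return self.analytic_impulse(v_current, v_target) + self.residual

# Usage: with a zero policy residual, the assistance reduces to the
# analytic impulse alone.
assist = HybridAssist(mass=1.0)
j = assist.step(np.zeros(3), np.array([1.0, 0.0, 0.0]), np.zeros(3))
```

Splitting the signal this way lets the analytic term absorb the sharp, per-step corrections while the policy only has to learn a smooth residual, which is consistent with the stability motivation stated above.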