Symmetry-Guided Memory Augmentation for Efficient Locomotion Learning

arXiv cs.RO, March 26, 2026


Key Points

  • The paper introduces Symmetry-Guided Memory Augmentation (SGMA) to make reinforcement learning for legged locomotion more data-efficient by reusing structured experience rather than requiring extra environment interactions.
  • SGMA generates physically consistent training variations using robot/task symmetries and extends these transformations to the policy’s memory states to preserve task-relevant context.
  • The authors demonstrate SGMA on quadruped and humanoid locomotion tasks in simulation, and also validate it on a real quadruped robot.
  • Experiments across challenging settings such as joint failures and payload changes show that the approach trains policies efficiently while retaining robust locomotion performance.

Abstract

Training reinforcement learning (RL) policies for legged locomotion often requires extensive environment interactions, which are costly and time-consuming. We propose Symmetry-Guided Memory Augmentation (SGMA), a framework that improves training efficiency by combining structured experience augmentation with memory-based context inference. Our method leverages robot and task symmetries to generate additional, physically consistent training experiences without requiring extra interactions. To avoid the pitfalls of naive augmentation, we extend these transformations to the policy's memory states, enabling the agent to retain task-relevant context and adapt its behavior accordingly. We evaluate the approach on quadruped and humanoid robots in simulation, as well as on a real quadruped platform. Across diverse locomotion tasks involving joint failures and payload variations, our method achieves efficient policy training while maintaining robust performance, demonstrating a practical route toward data-efficient RL for legged robots.
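The core idea, reusing experience via symmetry transforms while keeping memory states consistent, can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: the signed-permutation matrices, joint ordering, and the choice of an identity transform for reward are all assumptions made for this toy example. It mirrors a transition left/right and applies a matching transform to the policy's memory state so the augmented context stays physically consistent.

```python
import numpy as np

def mirror_matrix(pairs, signs, dim):
    """Build a signed-permutation matrix that swaps index pairs
    (e.g. left/right joints) and flips signs of lateral quantities.
    `pairs` and `signs` are illustrative assumptions; the actual
    transforms depend on the robot's joint ordering."""
    M = np.zeros((dim, dim))
    mapped = set()
    for i, j in pairs:
        M[i, j] = signs[i]  # new component i comes from old component j
        M[j, i] = signs[j]
        mapped.update((i, j))
    for k in range(dim):
        if k not in mapped:
            M[k, k] = signs[k]  # unswapped components keep (or flip) sign
    return M

def augment_transition(obs, act, rew, next_obs, hidden,
                       M_obs, M_act, M_mem):
    """Return a mirrored copy of a transition without extra environment
    interaction. The memory state is transformed with M_mem so the
    augmented experience carries consistent task context; reward is
    assumed symmetry-invariant here."""
    return (M_obs @ obs, M_act @ act, rew, M_obs @ next_obs, M_mem @ hidden)

# Toy 4-D observation: [fwd_vel, lat_vel, left_hip, right_hip].
# Mirroring swaps the hip joints and flips lateral velocity.
M_obs = mirror_matrix(pairs=[(2, 3)], signs=[1, -1, 1, 1], dim=4)
obs = np.array([1.0, 0.5, 0.2, -0.3])
m_obs, m_act, m_rew, m_next, m_hid = augment_transition(
    obs, np.array([0.1, -0.1]), 1.0, obs, np.array([0.0, 0.2]),
    M_obs, np.eye(2), np.eye(2))
```

The key detail this sketch highlights is the last argument pair: naive augmentation would mirror only observations and actions, leaving a recurrent policy's hidden state describing the *unmirrored* trajectory; applying a matching transform to the memory state keeps the augmented context coherent.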