Constraint-Aware Reinforcement Learning via Adaptive Action Scaling

arXiv cs.RO / 4/3/2026

Key Points

  • The paper addresses safe reinforcement learning by reducing constraint violations during exploration while maintaining strong task performance.
  • Instead of relying on a single policy that jointly optimizes conflicting reward and safety objectives, or on an external hard safety filter that overrides actions, it introduces a modular cost-aware regulator that adaptively scales actions based on predicted constraint violations.
  • The regulator modulates actions smoothly to preserve exploration, while avoiding degenerate suppression in which the agent becomes overly constrained.
  • Experiments show the method integrates with off-policy RL algorithms like SAC and TD3, achieving state-of-the-art return-to-cost ratios on Safety Gym locomotion tasks with sparse costs.
  • Reported results include up to 126× fewer constraint violations and more than an order-of-magnitude increase in returns versus prior approaches.

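To make the mechanism concrete, here is a minimal sketch of cost-aware action scaling. All names, the sigmoid gate, and the parameter values are illustrative assumptions, not the paper's actual regulator (which is trained, not hand-set): the key ideas it mirrors are smooth modulation rather than a hard override, and a nonzero scale floor so actions are never fully suppressed.

```python
import math

def regulator_scale(predicted_cost, threshold=0.5, sharpness=5.0, floor=0.2):
    """Smoothly map a predicted constraint cost to an action scale in (floor, 1].

    All parameters here are hypothetical: in the paper the regulator is
    learned to minimize constraint violations, not hand-tuned.
    """
    # Sigmoid gate: close to 1 when the predicted cost is well below the
    # threshold, and it decays smoothly (never abruptly) as cost rises.
    gate = 1.0 / (1.0 + math.exp(sharpness * (predicted_cost - threshold)))
    # The floor keeps the scale strictly positive, avoiding degenerate
    # suppression where the agent stops exploring entirely.
    return floor + (1.0 - floor) * gate

def regulate(action, predicted_cost):
    """Scale a policy's action vector element-wise by the regulator's factor."""
    s = regulator_scale(predicted_cost)
    return [s * a for a in action]

# Low predicted cost -> near-full action; high predicted cost -> floor-scaled action.
safe_action = regulate([1.0, -0.5], predicted_cost=0.0)
risky_action = regulate([1.0, -0.5], predicted_cost=2.0)
```

Because the scaling is smooth and applied on top of the policy's output, a wrapper like this can sit between any off-policy actor (e.g. SAC or TD3) and the environment without modifying the policy's training objective.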
Abstract

Safe reinforcement learning (RL) seeks to mitigate unsafe behaviors that arise from exploration during training by reducing constraint violations while maintaining task performance. Existing approaches typically rely on a single policy to jointly optimize reward and safety, which can cause instability due to conflicting objectives, or they use external safety filters that override actions and require prior system knowledge. In this paper, we propose a modular cost-aware regulator that scales the agent's actions based on predicted constraint violations, preserving exploration through smooth action modulation rather than overriding the policy. The regulator is trained to minimize constraint violations while avoiding degenerate suppression of actions. Our approach integrates seamlessly with off-policy RL methods such as SAC and TD3, and achieves state-of-the-art return-to-cost ratios on Safety Gym locomotion tasks with sparse costs, reducing constraint violations by up to 126 times while increasing returns by over an order of magnitude compared to prior methods.