Lyapunov-Guided Self-Alignment: Test-Time Adaptation for Offline Safe Reinforcement Learning

arXiv cs.AI / 4/30/2026


Key Points

  • The paper proposes SAS (Self-Alignment for Safety), a transformer-based method for offline safe reinforcement learning that adapts at test time without retraining or parameter updates.
  • SAS uses a self-alignment mechanism where the agent imagines multiple trajectories and selects only the segments that satisfy a Lyapunov safety condition.
  • The selected, feasible trajectory segments are then reused as in-context prompts, effectively steering the agent back toward safe behavior during deployment.
  • The authors interpret SAS as converting Lyapunov-guided imagination into control-invariant prompting; under a hierarchical RL view, transformer prompting acts as Bayesian inference over latent skills.
  • Experiments on Safety Gymnasium and MuJoCo show that SAS reduces cost and failure rates while maintaining or improving return compared with baselines.
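The filter-then-prompt loop described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function names, the rollout interface, and the specific Lyapunov decrease test (`V(s') - V(s) <= -margin` at every step) are assumptions chosen to make the idea concrete.

```python
# Hedged sketch of SAS-style test-time self-alignment.
# All names here (lyapunov_feasible, self_align_prompt, rollout_model)
# are illustrative assumptions, not the paper's actual API.

def lyapunov_feasible(segment, V, margin=0.0):
    """A trajectory segment passes the (assumed) Lyapunov condition if
    the Lyapunov value V is non-increasing at every transition."""
    return all(V(s_next) - V(s) <= -margin
               for s, s_next in zip(segment, segment[1:]))

def self_align_prompt(rollout_model, V, num_imagined=8, horizon=16):
    """Imagine several trajectories with the pretrained model, keep only
    the Lyapunov-feasible ones, and return them to be recycled as the
    transformer's in-context prompt -- no parameter updates involved."""
    imagined = [rollout_model(horizon) for _ in range(num_imagined)]
    return [traj for traj in imagined if lyapunov_feasible(traj, V)]

# Toy demo on 1-D states with V(s) = |s|: a contracting rollout is
# feasible, a diverging one is filtered out of the prompt.
V = abs
contracting = [1.0 * 0.5 ** t for t in range(4)]  # 1.0, 0.5, 0.25, 0.125
diverging = [1.0 * 2.0 ** t for t in range(4)]    # 1.0, 2.0, 4.0, 8.0
rollouts = iter([contracting, diverging])
prompt = self_align_prompt(lambda h: next(rollouts), V,
                           num_imagined=2, horizon=4)
print(len(prompt))  # → 1: only the contracting trajectory survives
```

In the real method the surviving segments condition a decision-transformer-style policy at test time; the point of the toy is only the selection rule, which steers generation back toward states where V keeps decreasing.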

Abstract

Offline reinforcement learning (RL) agents often fail when deployed, as the gap between training datasets and real environments leads to unsafe behavior. To address this, we present SAS (Self-Alignment for Safety), a transformer-based framework that enables test-time adaptation in offline safe RL without retraining. In SAS, the main mechanism is self-alignment: at test time, the pretrained agent generates several imagined trajectories and selects those satisfying the Lyapunov condition. These feasible segments are then recycled as in-context prompts, allowing the agent to realign its behavior toward safety while avoiding parameter updates. In effect, SAS turns Lyapunov-guided imagination into control-invariant prompts, and its transformer architecture admits a hierarchical RL interpretation where prompting functions as Bayesian inference over latent skills. Across Safety Gymnasium and MuJoCo benchmarks, SAS consistently reduces cost and failure while maintaining or improving return.