Learning Control Policies to Provably Satisfy Hard Affine Constraints for Black-Box Hybrid Dynamical Systems
arXiv cs.RO · April 27, 2026
Key Points
- The paper addresses provably safe control for black-box hybrid dynamical systems, where unknown nonlinear dynamics and instantaneous state jumps (impacts/reset maps) make strict constraint satisfaction difficult.
- It trains reinforcement learning (RL) control policies that are constrained to be affine and made “repulsive” near constraint boundaries, so that closed-loop trajectories provably satisfy affine state constraints (see the first sketch after this list).
- To handle safety violations caused by hybrid impacts, it introduces an additional repulsive affine region before the reset, so that post-reset states also remain within the constraints (see the second sketch below).
- The authors give sufficient theoretical conditions guaranteeing closed-loop safety and validate the method against reward-shaping and learned control barrier function (CBF) baselines on hybrid benchmarks (e.g., a constrained pendulum and a paddle juggler), showing improved policy quality while always maintaining safety.
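
The “repulsive” mechanism in the second bullet can be illustrated with a small model-based stand-in. The following is a minimal sketch assuming control-affine dynamics with a known model, which the paper does not assume (it works with black-box dynamics); the function name `repulsive_action` and the parameters `margin` and `alpha` are hypothetical, not the authors' API:

```python
import numpy as np

def repulsive_action(x, u_rl, a_i, b_i, f, g, margin=0.05, alpha=1.0):
    """Minimally correct an RL action near one affine constraint
    a_i @ x <= b_i so the closed loop is 'repulsive' there, assuming
    control-affine dynamics x_dot = f(x) + g(x) @ u with known f, g.
    Illustrative only: the paper treats the dynamics as a black box."""
    if b_i - a_i @ x >= margin:            # interior: keep the RL action
        return u_rl
    # Boundary region: require a_i @ x_dot <= -alpha, i.e. the state
    # must move strictly away from the constraint boundary.
    drift = a_i @ f(x)                     # uncontrolled rate along a_i
    ctrl = a_i @ g(x)                      # per-input rate along a_i
    violation = drift + ctrl @ u_rl + alpha
    if violation <= 0.0:                   # RL action already repulsive
        return u_rl
    # Closest action (in Euclidean norm) satisfying the repulsion
    # condition: project u_rl onto the half-space
    # {u : drift + ctrl @ u <= -alpha}.  Assumes ctrl != 0.
    return u_rl - (violation / (ctrl @ ctrl)) * ctrl
```

With a known model this reduces to a single-constraint CBF-style projection; the paper's contribution is certifying comparable repulsive behavior when the dynamics are unknown and the system undergoes resets.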
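
The pre-reset region in the third bullet can likewise be sketched for the special case of an affine reset map x⁺ = Jx + c: pulling the affine safe set back through the reset yields another affine region, so enforcing it before impact keeps post-reset states safe. `preimpact_constraints`, `J`, `c`, and the example numbers are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def preimpact_constraints(A, b, J, c):
    """Pull the affine safe set {x : A @ x <= b} back through an affine
    reset map x_plus = J @ x + c.  States satisfying the returned
    constraints just before impact are mapped inside the safe set by
    the reset.  Affine special case for illustration; the paper's
    reset maps need not have this form."""
    # A @ (J @ x + c) <= b   <=>   (A @ J) @ x <= b - A @ c
    return A @ J, b - A @ c

# Made-up paddle-juggler-style example: state (position, velocity),
# impact reverses velocity with restitution 0.8, and we want
# |velocity| <= 5 to keep holding after the reset.
A = np.array([[0.0, 1.0], [0.0, -1.0]])   # |v| <= 5 as two half-spaces
b = np.array([5.0, 5.0])
J = np.array([[1.0, 0.0], [0.0, -0.8]])   # position kept, v_plus = -0.8 v
c = np.zeros(2)
A_pre, b_pre = preimpact_constraints(A, b, J, c)
```

The pulled-back set stays affine, which is what lets the same repulsive construction be reused ahead of the reset surface.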
Related Articles

- Big Tech firms are accelerating AI investments and integration, while regulators and companies focus on safety and responsible adoption. (Dev.to)
- Everyone Wants AI Agents. Fewer Teams Are Ready for the Messy Business Context Behind Them (Dev.to)
- AI Coding Tool Comparison 2026: Claude Code vs Cursor vs Gemini CLI vs Codex (Dev.to)
- How I Improved My YouTube Shorts and Podcast Audio Workflow with AI Tools (Dev.to)
- An improvement of the convergence proof of the ADAM-Optimizer (Dev.to)