Safe Control using Learned Safety Filters and Adaptive Conformal Inference
arXiv cs.RO / 4/21/2026
Key Points
- The paper introduces Adaptive Conformal Filtering (ACoFi), which combines learned Hamilton–Jacobi reachability-based safety filters with adaptive conformal inference for control systems that use an unsafe nominal policy.
- ACoFi updates its switching criterion online: observed prediction errors are used to maintain an adaptive uncertainty interval around the nominal policy's predicted safety values.
- When the estimated range suggests the nominal action may be unsafe, the safety filter switches to a learned safe policy rather than relying on a fixed threshold.
- The method provides a “soft” safety guarantee: the rate of incorrect uncertainty quantification for the nominal policy’s predicted safety is asymptotically upper bounded by a user-specified parameter.
- Experiments in a Dubins car simulation and Safety Gymnasium show ACoFi improves over a fixed-threshold baseline, producing higher learned safety values and fewer safety violations, particularly under out-of-distribution conditions.
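The switching logic described above can be sketched in a few lines. This is a hedged illustration, not the authors' implementation: the function names (`aci_update`, `filtered_action`), the choice to adapt the interval radius directly, and the step size are all assumptions; the general update rule follows the standard adaptive conformal inference recipe of widening the interval after a coverage miss and narrowing it otherwise.

```python
def aci_update(radius, miss, target_rate=0.1, step=0.05):
    """Adaptive-conformal-style update (illustrative): after each step,
    grow the uncertainty radius if the previous interval missed the true
    safety value (miss=1), shrink it slightly otherwise (miss=0). The
    long-run miss rate is driven toward target_rate."""
    return max(0.0, radius + step * (miss - target_rate))

def filtered_action(nominal_action, safe_action, v_hat, radius):
    """Safety filter: v_hat is the learned safety value of the nominal
    action (>= 0 interpreted as safe). Switch to the learned safe policy
    whenever the lower end of the uncertainty interval dips below zero,
    instead of comparing v_hat to a fixed threshold."""
    if v_hat - radius < 0.0:
        return safe_action
    return nominal_action
```

In this sketch, a larger radius makes the filter more conservative (it intervenes earlier), and the radius itself shrinks automatically when the learned value function proves reliable, which is what lets the adaptive criterion outperform a fixed threshold under distribution shift.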