VISION-SLS: Safe Perception-Based Control from Learned Visual Representations via System Level Synthesis
arXiv cs.LG / 4/29/2026
Key Points
- VISION-SLS is a control method that uses high-resolution RGB images to compute nonlinear output-feedback control with robust constraint-satisfaction guarantees under calibrated uncertainty.
- The approach combines a learned low-dimensional observation map built from pretrained visual features (with state-dependent error bounds) and a causal affine time-varying output-feedback policy optimized via System Level Synthesis (SLS).
- The authors introduce a scalable solver for the resulting nonconvex optimization problem, combining sequential convex programming with efficient Riccati recursions.
- Experiments on a simulated 4D car, a 10D quadrotor, and a 59D humanoid under partial observability show safe, information-gathering behavior and constraint satisfaction using empirically calibrated error bounds.
- Hardware validation demonstrates safe ground-vehicle control from onboard images, with improved safety rate and solve time versus baselines, and the code is published on GitHub.
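The policy class described above, a causal affine time-varying output-feedback law u_t = u_bar_t + Σ_{k≤t} K[t][k](y_k − y_bar_k), can be illustrated with a toy sketch. Everything here is an assumption for illustration: the double-integrator dynamics, the hand-picked gains, and the direct linear observation are invented stand-ins, not the paper's models (which use a learned observation map from images and gains synthesized via SLS).

```python
import numpy as np

# Toy LTI system (illustrative only, NOT the paper's dynamics):
# 1D double integrator with dt = 0.1, observing position only.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
C = np.array([[1.0, 0.0]])
T = 10  # horizon

def simulate(x0, u_bar, K=None, y_bar=None):
    """Roll out the system; if K and y_bar are given, apply the
    causal affine time-varying output-feedback correction."""
    x, ys, us = x0.astype(float), [], []
    for t in range(T):
        y = C @ x              # a learned observation map would sit here
        ys.append(y)
        u = u_bar[t].copy()
        if K is not None:
            # Causality: u_t depends only on measurements y_0 .. y_t.
            for k in range(t + 1):
                u = u + K[t][k] @ (ys[k] - y_bar[k])
        us.append(u)
        x = A @ x + B @ u
    return np.array(ys), np.array(us)

u_bar = [np.zeros(1) for _ in range(T)]
# Hypothetical constant gains; an SLS synthesis would optimize these
# (and the nominal trajectory) subject to robustness constraints.
K = [[-0.5 * np.eye(1) if k == t else 0.05 * np.eye(1)
      for k in range(t + 1)] for t in range(T)]

y_bar, _ = simulate(np.array([0.0, 0.0]), u_bar)            # nominal rollout
_, u_nom = simulate(np.array([0.0, 0.0]), u_bar, K, y_bar)  # no deviation
_, u_dev = simulate(np.array([0.3, 0.0]), u_bar, K, y_bar)  # perturbed start
```

Starting at the nominal state, all output deviations vanish and the policy returns the nominal inputs; a perturbed start produces corrective inputs built only from past and current measurements, which is the causal structure SLS exploits.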