MetaTune: Adjoint-based Meta-tuning via Robotic Differentiable Dynamics
arXiv cs.RO / 3/31/2026
Key Points
- MetaTune is a unified framework that jointly auto-tunes feedback controllers and disturbance observers using differentiable closed-loop meta-learning for robotic systems under uncertainty.
- The method employs a portable neural policy together with physics-informed gradients from differentiable system dynamics, enabling adaptive controller gains across tasks and operating conditions.
- An adjoint-based technique computes meta-gradients backward in time to directly minimize the cost-to-go, reducing computational complexity relative to forward-horizon methods to linear in the data horizon.
- Experiments on quadrotor control report consistent gains over existing differentiable tuning approaches, with gradient computation time reduced by more than 50%.
- In PX4-Gazebo hardware-in-the-loop simulation, the learned adaptive policy reduces tracking error by 15–20% at aggressive speeds and by up to 40% under strong disturbances, and achieves zero-shot sim-to-sim transfer without fine-tuning.
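To make the adjoint idea concrete, here is a minimal sketch of backward-in-time gradient computation for tuning a feedback gain. The scalar system, cost, and all parameter names (`a`, `b`, `k`) are illustrative assumptions, not the paper's actual quadrotor model; the point is that a single backward sweep yields the gradient of the accumulated cost in time linear in the horizon.

```python
import numpy as np

def rollout(k, a=0.9, b=0.5, x0=1.0, T=10):
    """Forward pass of a toy closed loop: x_{t+1} = (a - b*k) * x_t.
    Returns the state trajectory and the cost J = sum_{t>=1} x_t^2."""
    xs = [x0]
    for _ in range(T):
        xs.append((a - b * k) * xs[-1])
    xs = np.array(xs)
    return xs, float(np.sum(xs[1:] ** 2))

def adjoint_grad(k, a=0.9, b=0.5, x0=1.0, T=10):
    """One O(T) backward sweep computing dJ/dk via the adjoint recursion
    lambda_t = dJ/dx_t = 2*x_t + A*lambda_{t+1}, with A = a - b*k."""
    xs, _ = rollout(k, a, b, x0, T)
    A = a - b * k
    lam = 2.0 * xs[T]               # lambda_T = dJ/dx_T
    g = lam * (-b) * xs[T - 1]      # dx_T/dk contribution at t = T-1
    for t in range(T - 1, 0, -1):
        lam = 2.0 * xs[t] + A * lam  # propagate adjoint backward in time
        g += lam * (-b) * xs[t - 1]  # accumulate dx_t/dk contributions
    return float(g)
```

A forward-mode alternative would differentiate the whole trajectory once per parameter; the backward sweep above gets the full gradient in one pass, which is the efficiency the paper's 50%+ speedup claim refers to.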



