Rectify, Don't Regret: Avoiding Pitfalls of Differentiable Simulation in Trajectory Prediction
arXiv cs.RO / 3/25/2026
Key Points
- The paper argues that fully differentiable closed-loop trajectory simulators can suffer from shortcut learning, where backpropagated gradients leak future ground-truth information through induced state inputs.
- It proposes a “detached receding horizon rollout” method that explicitly breaks the computation graph across simulation steps to prevent non-causal “regret-based” optimization of past predictions.
- Experiments on nuScenes and DeepScenario show the approach improves robustness of recovery from drifted states, reducing target collisions by up to 33.24% versus fully differentiable closed-loop training at high replanning frequencies.
- Compared with standard open-loop baselines, the detached (step-wise non-differentiable) training framework also reduces collisions by up to 27.74% in dense environments while improving multi-modal prediction diversity and lane alignment.
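The core mechanism can be illustrated with a toy sketch. The paper's actual method would detach tensors in an autodiff framework (e.g., `torch.Tensor.detach()`); here a minimal dependency-tracking `Val` class (an assumption, not the authors' code) stands in for a tensor, recording which planning steps the final loss could reach by backpropagation. Detaching the state before each replanning step leaves only the most recent plan reachable, so past predictions cannot be "regret-optimized" with future information:

```python
# Illustrative sketch of a detached receding-horizon rollout.
# Val and plan() are hypothetical stand-ins, not the paper's implementation.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Val:
    x: float
    # Planning steps reachable by backprop through this value's history.
    deps: frozenset = field(default_factory=frozenset)

    def detach(self):
        # Keep the value, drop the computation history -- mirrors what
        # tensor.detach() does in a real autodiff framework.
        return Val(self.x, frozenset())

def plan(state: Val, step: int) -> Val:
    # Stand-in for the trajectory predictor: the new plan depends on the
    # current state and registers this planning step in the dependency set.
    return Val(state.x + 1.0, state.deps | {step})

def rollout(horizon: int, detached: bool) -> Val:
    state = Val(0.0)
    for t in range(horizon):
        if detached:
            state = state.detach()  # break the graph before re-planning
        state = plan(state, t)
    return state

# Fully differentiable closed-loop rollout: the terminal loss can reach
# every past planning step -- the shortcut the paper warns about.
full = rollout(3, detached=False)
# Detached rollout: gradients reach only the final planning step.
det = rollout(3, detached=True)
print(sorted(full.deps), sorted(det.deps))  # -> [0, 1, 2] [2]
```

The simulated state value itself is identical in both variants; only the gradient path differs, which is exactly why detaching prevents the non-causal shortcut without changing the forward rollout.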