AI Navigate

Evaluating randomized smoothing as a defense against adversarial attacks in trajectory prediction

arXiv cs.LG / 3/12/2026


Key Points

  • The paper introduces randomized smoothing as a defense mechanism to improve the robustness of trajectory prediction models against adversarial perturbations.
  • The authors evaluate multiple base trajectory prediction models across various datasets to assess robustness gains from randomized smoothing.
  • Results show consistent robustness improvements without compromising accuracy in non-adversarial settings.
  • The approach is described as simple and computationally inexpensive, offering a practical defense for autonomous driving systems.
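The core idea of randomized smoothing is to query the base predictor on several noisy copies of the input trajectory and aggregate the outputs. The sketch below is a minimal illustration of that idea, not the paper's implementation: the `predict` callable, the array shapes, and mean aggregation are all assumptions for demonstration (the paper evaluates multiple smoothing strategies and base models).

```python
import numpy as np

def smoothed_predict(predict, history, sigma=0.05, n_samples=20, seed=None):
    """Randomized-smoothing wrapper around a trajectory predictor.

    predict : hypothetical function mapping a (T, 2) array of past
              positions to an (H, 2) array of predicted future positions.
    sigma   : std-dev of isotropic Gaussian noise added to the input.
    """
    rng = np.random.default_rng(seed)
    preds = []
    for _ in range(n_samples):
        noisy = history + rng.normal(0.0, sigma, size=history.shape)
        preds.append(predict(noisy))
    # Aggregate by averaging the sampled predictions (mean smoothing);
    # other aggregation strategies are possible.
    return np.mean(preds, axis=0)

# Toy example: a constant-velocity "model" stands in for a real predictor.
def constant_velocity(history, horizon=5):
    v = history[-1] - history[-2]                      # last observed velocity
    return history[-1] + v * np.arange(1, horizon + 1)[:, None]

hist = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
out = smoothed_predict(constant_velocity, hist, sigma=0.01, n_samples=50, seed=0)
print(out.shape)  # (5, 2)
```

Because the added noise is small and the outputs are averaged, the smoothed prediction stays close to the clean one, which is the intuition behind the reported accuracy preservation in non-adversarial settings.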

Abstract

Accurate and robust trajectory prediction is essential for safe and efficient autonomous driving, yet recent work has shown that even state-of-the-art prediction models are highly vulnerable to mild adversarial perturbations of their inputs. Although model vulnerabilities to such attacks have been studied, work on effective countermeasures remains limited. In this work, we develop and evaluate a new defense mechanism for trajectory prediction models based on randomized smoothing -- an approach previously applied successfully in other domains. We evaluate its ability to improve model robustness through a series of experiments that test different randomized-smoothing strategies. We show that our approach consistently improves the prediction robustness of multiple base trajectory prediction models on various datasets without compromising accuracy in non-adversarial settings. Our results demonstrate that randomized smoothing offers a simple and computationally inexpensive technique for mitigating adversarial attacks in trajectory prediction.