Detection of Adversarial Attacks in Robotic Perception

arXiv cs.RO / 3/31/2026

Key Points

  • The paper highlights that deep neural networks used for robotic semantic segmentation perform well yet remain vulnerable to adversarial attacks that can endanger safety-critical robotic applications (a minimal attack sketch follows this list).
  • It argues that adversarial robustness work done for image classification does not directly transfer to semantic segmentation in robotics, due to architectural and task-specific differences.
  • The study focuses on devising detection strategies tailored to robotic perception pipelines rather than on improving model accuracy alone.
  • It positions adversarial attack detection as a specialized research direction needed to make robotic perception systems more resilient in real-world settings.
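
To make the threat in the first point concrete, here is a minimal sketch of a one-step FGSM-style perturbation against a segmentation network. It assumes a PyTorch model that maps a (B, C, H, W) image batch to per-pixel class logits; the function name, tensor shapes, and epsilon value are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def fgsm_segmentation_attack(model, image, target_mask, epsilon=0.03):
    """One-step FGSM-style perturbation against a segmentation model.

    Assumes `model` maps a (B, C, H, W) image batch in [0, 1] to
    per-pixel class logits of shape (B, num_classes, H, W), and that
    `target_mask` holds the true class index per pixel, shape (B, H, W).
    """
    image = image.clone().detach().requires_grad_(True)
    logits = model(image)
    # Per-pixel cross-entropy; the attack ascends this loss so that
    # as many pixels as possible flip to a wrong class.
    loss = F.cross_entropy(logits, target_mask)
    loss.backward()
    # Step each pixel in the direction that increases the loss,
    # then clamp back to the valid image range.
    adv_image = image + epsilon * image.grad.sign()
    return adv_image.clamp(0.0, 1.0).detach()
```

A perturbation of this size is typically imperceptible to a human operator yet can corrupt large regions of the predicted mask, which is what makes such attacks a concern for downstream planning and control.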

Abstract

Deep Neural Networks (DNNs) achieve strong performance in semantic segmentation for robotic perception but remain vulnerable to adversarial attacks, threatening safety-critical applications. While adversarial robustness has been studied extensively for image classification, semantic segmentation in robotic contexts requires specialized architectures and dedicated detection strategies.
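
The abstract points to detection strategies without specifying them, so the sketch below illustrates one common, generic family: checking how stable the per-pixel predictions are under small random input noise. This is an illustration in PyTorch, not the authors' method; the function name, trial count, and noise scale are invented for the example.

```python
import torch

@torch.no_grad()
def consistency_score(model, image, n_trials=8, noise_std=0.01):
    """Generic stability check for segmentation outputs (illustrative only).

    Re-runs the model on lightly noised copies of the input and measures
    how often the per-pixel argmax prediction agrees with the prediction
    on the unmodified input. Adversarial examples often sit close to
    decision boundaries, so their segmentation maps tend to be less
    stable under such noise than clean inputs.
    """
    clean_pred = model(image).argmax(dim=1)  # (B, H, W) class indices
    agreement = torch.zeros_like(clean_pred, dtype=torch.float32)
    for _ in range(n_trials):
        noisy = (image + noise_std * torch.randn_like(image)).clamp(0.0, 1.0)
        agreement += (model(noisy).argmax(dim=1) == clean_pred).float()
    # Mean per-pixel agreement in [0, 1]; lower values suggest an
    # unstable, possibly adversarial, input.
    return (agreement / n_trials).mean().item()
```

A robotic pipeline could flag a frame whenever this score falls below a threshold calibrated on clean validation data and fall back to conservative behavior, though any such threshold trades false alarms against missed attacks.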