AI Navigate

Adversarial Reinforcement Learning for Detecting False Data Injection Attacks in Vehicular Routing

arXiv cs.AI / 3/13/2026

💬 Opinion · Models & Research

Key Points

  • The paper frames false data injection attacks on vehicular routing as a strategically zero-sum game between an attacker and a defender.
  • It proposes a multi-agent reinforcement learning method to compute a Nash equilibrium of this game, yielding an optimal detection strategy based on observed edge travel times.
  • The equilibrium detection strategy guarantees a worst-case bound on total travel time even under attack, ensuring robust routing in transportation networks.
  • Experiments show that the learned policies are approximate equilibria and significantly outperform baselines for both the attacker and the defender, improving the resilience of routing systems.
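To make the game-theoretic framing concrete, here is a minimal illustrative sketch, not the paper's algorithm: fictitious play on a tiny hypothetical 2x2 matrix game, where the attacker picks an edge to perturb and the defender picks an edge to audit. The payoff matrix, the `fictitious_play` helper, and all numbers are assumptions for illustration only; the paper uses multi-agent reinforcement learning on a far richer routing game.

```python
import numpy as np

# Hypothetical 2x2 payoff matrix: rows = attacker actions (which edge to
# perturb), columns = defender actions (which edge to audit). Entries are
# the extra travel time the attacker inflicts; the attacker maximizes and
# the defender minimizes, making the game zero-sum.
A = np.array([[0.0, 3.0],
              [2.0, 0.0]])

def fictitious_play(A, iters=20000):
    """Approximate a Nash equilibrium of a zero-sum matrix game by having
    each player best-respond to the opponent's empirical action mixture."""
    n_rows, n_cols = A.shape
    row_counts = np.zeros(n_rows)
    col_counts = np.zeros(n_cols)
    row_counts[0] = col_counts[0] = 1.0  # arbitrary initial play
    for _ in range(iters):
        br_row = np.argmax(A @ (col_counts / col_counts.sum()))  # attacker
        br_col = np.argmin((row_counts / row_counts.sum()) @ A)  # defender
        row_counts[br_row] += 1
        col_counts[br_col] += 1
    return row_counts / row_counts.sum(), col_counts / col_counts.sum()

x, y = fictitious_play(A)
value = x @ A @ y  # game value: the attacker's guaranteed payoff, and
                   # equivalently the defender's worst-case travel-time cost
```

For this toy matrix the equilibrium mixes are roughly (0.4, 0.6) for the attacker and (0.6, 0.4) for the defender, with game value 1.2: the same kind of worst-case guarantee the paper's learned equilibrium policies provide, here computed exactly because the game is tiny.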

Abstract

In modern transportation networks, adversaries can manipulate routing algorithms through false data injection attacks, for example by simulating heavy traffic with multiple devices running crowdsourced navigation applications, to mislead vehicles onto suboptimal routes and increase congestion. To address these threats, we formulate a strategically zero-sum game between an attacker, who injects such perturbations, and a defender, who detects anomalies based on the observed travel times of network edges. We propose a computational method based on multi-agent reinforcement learning to compute a Nash equilibrium of this game, providing an optimal detection strategy that ensures total travel time remains within a worst-case bound even in the presence of an attack. We present an extensive experimental evaluation that demonstrates the robustness and practical benefits of our approach, offering a powerful framework for improving the resilience of transportation networks against false data injection. In particular, we show that our approach yields approximate equilibrium policies and significantly outperforms baselines for both the attacker and the defender.
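The defender's observation model described above, anomaly detection from observed edge travel times, can be sketched with a naive fixed-threshold detector of the kind a learned equilibrium policy would be compared against. The baseline travel times, noise model, injected spike, and `detect` helper below are all illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical free-flow travel times (minutes) for 5 network edges.
baseline = np.array([4.0, 6.0, 3.0, 8.0, 5.0])

# Observed times: small natural noise on every edge, plus an injected
# spike on edge 2 simulating a false data injection attack (e.g., spoofed
# congestion reports from many fake navigation-app devices).
observed = baseline * rng.normal(1.0, 0.05, size=baseline.size)
observed[2] += 6.0  # attacker inflates edge 2's reported travel time

def detect(observed, baseline, threshold=1.5):
    """Flag edges whose reported travel time exceeds `threshold` times the
    baseline. A fixed threshold like this is the kind of naive defender
    baseline an equilibrium detection strategy would outperform."""
    return np.flatnonzero(observed > threshold * baseline)

flagged = detect(observed, baseline)  # should single out the attacked edge
```

A fixed threshold is easy to game: an attacker aware of it can keep perturbations just below the cutoff on many edges at once, which is exactly why the paper treats detection as a game and learns the defender's strategy at equilibrium rather than fixing it in advance.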