Event-based SLAM Benchmark for High-Speed Maneuvers

arXiv cs.RO / 28 Apr 2026


Key Points

  • The paper argues that current event-based SLAM/odometry methods, while reducing motion blur, have important gaps in handling arbitrary aggressive maneuvers, especially beyond limited assumptions like constant visibility or pure 3-DoF rotations.
  • It analyzes state-of-the-art event-based visual odometry and visual-inertial odometry approaches and identifies shortcomings in existing public datasets regarding the realism and coverage of aggressive motion and sensing conditions.
  • To address this, the authors introduce EvSLAM, an event-based benchmarking framework that defines high-speed maneuvers rigorously and includes diverse platforms, extreme lighting, and challenging motion patterns.
  • The framework also proposes a new evaluation metric intended to fairly measure the operational limits of event-based solutions and to reveal which architectures perform best under these stress cases.

Abstract

Event-based cameras are bio-inspired sensors with pixels that independently and asynchronously respond to brightness changes at microsecond resolution, offering the potential to handle visual tasks in high-speed maneuvering scenarios. Existing event-based approaches, although successful in mitigating motion blur caused by high-speed maneuvers, suffer from notable limitations. Some demonstrate successful pose tracking for a fronto-parallel, rapidly shaking camera close to the structure, while others assume pure (possibly aggressive) three-degree-of-freedom (3-DoF) rotations. The former requires persistent local map visibility within the field of view (FOV), whereas the latter fails to generalize to six-degree-of-freedom (6-DoF) motions where both linear and angular velocities may be large. Consequently, current successes do not demonstrate that event-based state estimation under arbitrary aggressive maneuvers is a solved problem. To quantitatively assess the extent to which the potential of event cameras has been unlocked, we conduct a thorough analysis of state-of-the-art (SOTA) event-based visual odometry (VO) and visual-inertial odometry (VIO) methods and report shortcomings in current public datasets. Furthermore, we introduce a benchmarking framework for event-based state estimation, called EvSLAM, characterized by sufficient variation in data collection platforms, diverse extreme lighting scenarios, and a wide scope of challenging motion patterns under a clear and rigorous definition of high-speed maneuvers for mobile robots, along with a novel evaluation metric designed to fairly assess the operational limits of event-based solutions. This framework benchmarks state-of-the-art methods, yielding insights into optimal architectures and persistent challenges.
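To make the abstract's description of event cameras concrete, here is a minimal sketch of the standard event-generation model (not the paper's code, and not the EvSLAM framework): each pixel emits an event whenever its log-intensity changes by more than a contrast threshold since the last event at that pixel, producing an asynchronous stream of (timestamp, x, y, polarity) tuples rather than frames. The threshold value and function names here are illustrative assumptions.

```python
import numpy as np

def simulate_events(log_frames, timestamps, threshold=0.2):
    """Convert a sequence of log-intensity frames into (t, x, y, polarity) events.

    Illustrative event-camera model: a pixel fires a positive (+1) or
    negative (-1) event when its log intensity drifts from the per-pixel
    reference by at least `threshold` (log-intensity units, assumed value).
    """
    ref = log_frames[0].astype(float).copy()  # per-pixel reference log intensity
    events = []
    for frame, t in zip(log_frames[1:], timestamps[1:]):
        diff = frame - ref
        ys, xs = np.nonzero(np.abs(diff) >= threshold)
        for x, y in zip(xs, ys):
            polarity = 1 if diff[y, x] > 0 else -1
            events.append((t, int(x), int(y), polarity))
            # Move the reference one threshold step toward the new intensity,
            # mimicking the per-pixel reset after an event.
            ref[y, x] += polarity * threshold
    return events
```

Real sensors interleave these events at microsecond granularity, which is why fast motion produces dense event streams instead of blurred frames; a VO/VIO pipeline consumes this stream directly or via intermediate representations such as time surfaces.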