BEVPredFormer: Spatio-temporal Attention for BEV Instance Prediction in Autonomous Driving

arXiv cs.CV / 4/6/2026


Key Points

  • The paper introduces BEVPredFormer, a camera-only architecture for BEV instance prediction that jointly performs bird’s-eye-view segmentation and motion estimation across current and future frames for autonomous driving.
  • It addresses the challenge of efficiently modeling dense spatio-temporal information using attention-based temporal processing, a recurrent-free design, gated transformer layers, and divided spatio-temporal attention mechanisms.
  • The model uses an attention-based 3D projection of camera information and a difference-guided feature extraction module to strengthen temporal representations.
  • Experiments on the nuScenes dataset show BEVPredFormer is on par with or better than state-of-the-art approaches, with ablation studies validating the impact of each architectural component.
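The divided spatio-temporal attention mentioned above factorizes attention along the temporal and spatial axes instead of attending over all frame-location pairs at once, which cuts the cost from O((T·S)²) to roughly O(T²·S + S²·T). The following is a minimal sketch of this general pattern; the module name, shapes, and layer choices are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class DividedSpatioTemporalAttention(nn.Module):
    """Sketch of divided attention: temporal axis first, then spatial axis.

    Assumes input of shape (B, T, S, C): batch, frames, BEV tokens, channels.
    (Hypothetical module, for illustration only.)
    """

    def __init__(self, dim, heads=4):
        super().__init__()
        self.temporal_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.spatial_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, x):
        B, T, S, C = x.shape
        # Temporal attention: each spatial token attends across the T frames.
        xt = x.permute(0, 2, 1, 3).reshape(B * S, T, C)
        t_out, _ = self.temporal_attn(*(self.norm1(xt),) * 3)
        x = x + t_out.reshape(B, S, T, C).permute(0, 2, 1, 3)
        # Spatial attention: each frame attends across its S BEV locations.
        xs = x.reshape(B * T, S, C)
        s_out, _ = self.spatial_attn(*(self.norm2(xs),) * 3)
        return x + s_out.reshape(B, T, S, C)
```

The residual connections around each attention step follow standard transformer practice; the gating variant described in the paper would replace these plain residuals.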

Abstract

A robust awareness of how dynamic scenes evolve is essential for Autonomous Driving systems, as they must accurately detect, track, and predict the behaviour of surrounding obstacles. Traditional perception pipelines that rely on modular architectures tend to suffer from cumulative errors and latency. Instance Prediction models provide a unified solution, performing Bird's-Eye-View segmentation and motion estimation across current and future frames using information obtained directly from different sensors. However, a key challenge in these models lies in effectively processing the dense spatial and temporal information inherent in dynamic driving environments. This level of complexity demands architectures capable of capturing fine-grained motion patterns and long-range dependencies without compromising real-time performance. We introduce BEVPredFormer, a novel camera-only architecture for BEV instance prediction that uses attention-based temporal processing to improve temporal and spatial comprehension of the scene and relies on an attention-based 3D projection of the camera information. BEVPredFormer employs a recurrent-free design that incorporates gated transformer layers, divided spatio-temporal attention mechanisms, and multi-scale task heads. Additionally, we incorporate a difference-guided feature extraction module that enhances temporal representations. Extensive ablation studies validate the effectiveness of each architectural component. When evaluated on the nuScenes dataset, BEVPredFormer performs on par with or surpasses state-of-the-art methods, highlighting its potential for robust and efficient Autonomous Driving perception.
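The difference-guided feature extraction described in the abstract can be understood through a simple pattern: temporal differences between consecutive BEV feature maps highlight regions where the scene is changing, and those differences are used to reweight the current features. The sketch below illustrates this general idea under stated assumptions; the class name, shapes, and the sigmoid-gated residual are hypothetical choices, not the paper's actual design.

```python
import torch
import torch.nn as nn

class DifferenceGuidedExtractor(nn.Module):
    """Sketch: gate per-frame BEV features by their frame-to-frame difference.

    Assumes input of shape (B, T, C, H, W): batch, frames, channels, BEV grid.
    (Hypothetical module, for illustration only.)
    """

    def __init__(self, channels):
        super().__init__()
        # 1x1 conv maps the raw difference to a per-pixel gate in [0, 1].
        self.gate = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, feats):
        B, T, C, H, W = feats.shape
        # Difference with the previous frame; the first frame is compared
        # with itself, giving a zero difference there.
        prev = torch.cat([feats[:, :1], feats[:, :-1]], dim=1)
        diff = (feats - prev).reshape(B * T, C, H, W)
        g = self.gate(diff)
        # Residual gating: changing regions are amplified, static ones kept.
        out = feats.reshape(B * T, C, H, W) * (1 + g)
        return out.reshape(B, T, C, H, W)
```

The multiplicative `(1 + g)` form leaves static regions untouched (gate near zero changes little) while boosting dynamic ones, which matches the stated goal of strengthening temporal representations.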