Generative Event Pretraining with Foundation Model Alignment

arXiv cs.CV · March 25, 2026


Key Points

  • The paper proposes GEP (Generative Event Pretraining), a two-stage method to train event-based visual foundation models despite limited labeled event data and challenging sensor characteristics.
  • GEP first aligns an event encoder to a frozen image foundation model using a joint regression-contrastive objective to ground event representations in image semantics.
  • It then pretrains a transformer backbone autoregressively on mixed event-image sequences to learn event-specific temporal dynamics.
  • Experiments show GEP outperforms prior event pretraining approaches on downstream tasks such as object recognition, segmentation, and depth estimation, with improved cross-domain generalization.
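The stage-1 alignment described above can be sketched as a joint regression-contrastive loss between event-encoder features and a frozen image foundation model's features. This is an illustrative reconstruction, not the paper's exact formulation: the cosine-distance regression term, the symmetric InfoNCE term, the temperature, and the loss weighting are all assumptions.

```python
import torch
import torch.nn.functional as F

def alignment_loss(event_feats, image_feats, temperature=0.07, reg_weight=1.0):
    """Sketch of a joint regression-contrastive alignment objective.

    event_feats: (B, D) features from the trainable event encoder.
    image_feats: (B, D) paired features from the frozen image VFM.
    """
    e = F.normalize(event_feats, dim=-1)
    v = F.normalize(image_feats, dim=-1)

    # Regression term: pull each event embedding toward its paired image
    # embedding (cosine distance; the paper may use a different regressor).
    reg = (1.0 - (e * v).sum(dim=-1)).mean()

    # Contrastive term: paired samples are positives, all other batch
    # items are negatives (symmetric InfoNCE over the similarity matrix).
    logits = e @ v.t() / temperature            # (B, B)
    targets = torch.arange(e.size(0))
    con = 0.5 * (F.cross_entropy(logits, targets)
                 + F.cross_entropy(logits.t(), targets))

    return reg_weight * reg + con
```

With this form, perfectly aligned event and image features drive both terms toward zero, while mismatched features are penalized by both the per-pair regression and the batch-level contrast.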

Abstract

Event cameras provide robust visual signals under fast motion and challenging illumination conditions thanks to their microsecond latency and high dynamic range. However, their unique sensing characteristics and limited labeled data make it challenging to train event-based visual foundation models (VFMs), which are crucial for learning visual features transferable across tasks. To tackle this problem, we propose GEP (Generative Event Pretraining), a two-stage framework that transfers semantic knowledge learned from internet-scale image datasets to event data while learning event-specific temporal dynamics. First, an event encoder is aligned to a frozen VFM through a joint regression-contrastive objective, grounding event features in image semantics. Second, a transformer backbone is autoregressively pretrained on mixed event-image sequences to capture the temporal structure unique to events. Our approach outperforms state-of-the-art event pretraining methods on a diverse range of downstream tasks, including object recognition, segmentation, and depth estimation. Together, VFM-guided alignment and generative sequence modeling yield a semantically rich, temporally aware event model that generalizes robustly across domains.
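The stage-2 generative pretraining the abstract describes, autoregressive modeling over mixed event-image sequences, can be sketched as next-token prediction with a causally masked transformer. The tokenization of events and images, the vocabulary size, and all architecture hyperparameters below are assumptions for illustration, not the paper's configuration.

```python
import torch
import torch.nn as nn

class TinyARBackbone(nn.Module):
    """Minimal causal transformer for next-token prediction (illustrative)."""
    def __init__(self, vocab_size=256, dim=64, heads=4, layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        layer = nn.TransformerEncoderLayer(
            dim, heads, dim_feedforward=4 * dim, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, layers)
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, tokens):
        # Causal mask so each position attends only to earlier tokens.
        mask = nn.Transformer.generate_square_subsequent_mask(tokens.size(1))
        h = self.encoder(self.embed(tokens), mask=mask)
        return self.head(h)

def next_token_loss(model, tokens):
    """Cross-entropy of predicting token t+1 from the prefix up to t.

    `tokens` stands in for an interleaved event/image token sequence.
    """
    logits = model(tokens[:, :-1])
    return nn.functional.cross_entropy(
        logits.reshape(-1, logits.size(-1)), tokens[:, 1:].reshape(-1))
```

Training the backbone to continue sequences that interleave image and event tokens is what, per the abstract, exposes it to the temporal structure unique to event streams while staying grounded in image semantics.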