AI Navigate

EventVGGT: Exploring Cross-Modal Distillation for Consistent Event-based Depth Estimation

arXiv cs.CV / 3/11/2026

Ideas & Deep Analysis · Models & Research

Key Points

  • EventVGGT is a novel framework designed for event-based monocular depth estimation that models event data as coherent video sequences rather than independent frames, addressing temporal inconsistency issues in prior methods.
  • The framework introduces a tri-level distillation strategy: Cross-Modal Feature Mixture (CMFM) fuses RGB and event features, Spatio-Temporal Feature Distillation (STFD) transfers spatio-temporal representations from the Visual Geometry Grounded Transformer (VGGT), and Temporal Consistency Distillation (TCD) enforces coherence across frames (see the sketch after this list).
  • EventVGGT significantly improves depth estimation accuracy, reducing the absolute mean depth error at 30 m on the EventScape dataset by over 53% (from 2.30 to 1.06), and demonstrates strong zero-shot generalization on the unseen DENSE and MVSEC datasets.
  • This work leverages advanced vision foundation models and multi-view geometric priors to enhance event-based 3D perception, particularly in challenging conditions involving high-speed motion and extreme lighting.
  • The research enhances robustness and temporal consistency in event-based depth prediction, which can benefit applications in robotics, autonomous driving, and other vision-based systems requiring reliable depth perception under dynamic conditions.
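
The tri-level objective can be pictured as three loss terms added together. The sketch below is a minimal PyTorch-style illustration of one plausible way to combine output-level, feature-level, and temporal-level distillation; all function, argument, and weight names are our own assumptions for this sketch, not the authors' implementation.

```python
import torch.nn.functional as F

def tri_level_loss(event_depth, aux_depth, teacher_depth,
                   event_feat, teacher_feat,
                   w_cmfm=1.0, w_stfd=1.0, w_tcd=1.0):
    """Hypothetical combination of output-, feature-, and temporal-level terms.

    event_depth, aux_depth, teacher_depth: (B, T, H, W) depth sequences.
    event_feat, teacher_feat: (B, T, C, h, w) intermediate features.
    """
    # (i) CMFM-style term: supervise the fused RGB+event auxiliary depth with the teacher depth.
    loss_cmfm = F.l1_loss(aux_depth, teacher_depth)
    # (ii) STFD-style term: align event features with the teacher's spatio-temporal features.
    loss_stfd = F.mse_loss(event_feat, teacher_feat)
    # (iii) TCD-style term: match inter-frame depth changes to enforce temporal coherence.
    delta_student = event_depth[:, 1:] - event_depth[:, :-1]
    delta_teacher = teacher_depth[:, 1:] - teacher_depth[:, :-1]
    loss_tcd = F.l1_loss(delta_student, delta_teacher)
    return w_cmfm * loss_cmfm + w_stfd * loss_stfd + w_tcd * loss_tcd
```

The interesting part is the third term: rather than matching absolute depth per frame, it matches frame-to-frame depth changes, which is how temporal consistency could be distilled from a video-level teacher.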

Computer Science > Computer Vision and Pattern Recognition

arXiv:2603.09385 (cs)
[Submitted on 10 Mar 2026]

Title: EventVGGT: Exploring Cross-Modal Distillation for Consistent Event-based Depth Estimation

Authors: Yinrui Ren and 10 other authors
Abstract: Event cameras offer superior sensitivity to high-speed motion and extreme lighting, making event-based monocular depth estimation a promising approach for robust 3D perception in challenging conditions. However, progress is severely hindered by the scarcity of dense depth annotations. While recent annotation-free approaches mitigate this by distilling knowledge from Vision Foundation Models (VFMs), a critical limitation persists: they process event streams as independent frames. By neglecting the inherent temporal continuity of event data, these methods fail to leverage the rich temporal priors encoded in VFMs, ultimately yielding temporally inconsistent and less accurate depth predictions. To address this, we introduce EventVGGT, a novel framework that explicitly models the event stream as a coherent video sequence. To the best of our knowledge, we are the first to distill spatio-temporal and multi-view geometric priors from the Visual Geometry Grounded Transformer (VGGT) into the event domain. We achieve this via a comprehensive tri-level distillation strategy: (i) Cross-Modal Feature Mixture (CMFM) bridges the modality gap at the output level by fusing RGB and event features to generate auxiliary depth predictions; (ii) Spatio-Temporal Feature Distillation (STFD) distills VGGT's powerful spatio-temporal representations at the feature level; and (iii) Temporal Consistency Distillation (TCD) enforces cross-frame coherence at the temporal level by aligning inter-frame depth changes. Extensive experiments demonstrate that EventVGGT consistently outperforms existing methods -- reducing the absolute mean depth error at 30m by over 53% on EventScape (from 2.30 to 1.06) -- while exhibiting robust zero-shot generalization on the unseen DENSE and MVSEC datasets.
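
As a rough illustration of the output-level fusion described in (i), the hypothetical module below gates RGB and event features per pixel and decodes an auxiliary depth map. The class name, sigmoid-gating scheme, and channel sizes are assumptions made for this sketch, not the paper's architecture; the paper may fuse the modalities differently.

```python
import torch
import torch.nn as nn

class CrossModalMixture(nn.Module):
    """Hypothetical fusion block: gate RGB and event features, decode auxiliary depth."""

    def __init__(self, channels: int = 256):
        super().__init__()
        # Per-pixel gate deciding how much weight each modality receives.
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Lightweight head producing the auxiliary depth prediction.
        self.depth_head = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, rgb_feat: torch.Tensor, event_feat: torch.Tensor) -> torch.Tensor:
        # rgb_feat, event_feat: (B, C, H, W) features from the RGB and event branches.
        g = self.gate(torch.cat([rgb_feat, event_feat], dim=1))
        fused = g * rgb_feat + (1.0 - g) * event_feat
        return self.depth_head(fused)  # (B, 1, H, W), supervised by the teacher depth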
Subjects: Computer Vision and Pattern Recognition (cs.CV)
Cite as: arXiv:2603.09385 [cs.CV]
  (or arXiv:2603.09385v1 [cs.CV] for this version)
  https://doi.org/10.48550/arXiv.2603.09385

Submission history

From: Jinjing Zhu
[v1] Tue, 10 Mar 2026 08:57:51 UTC (4,554 KB)