AI Navigate

EventVGGT: Exploring Cross-Modal Distillation for Consistent Event-based Depth Estimation

arXiv cs.CV / 11 Mar 2026

Ideas & Deep Analysis / Models & Research

Key Points

  • EventVGGT is a new framework for event-based monocular depth estimation that models event data as a coherent video sequence rather than as independent frames, addressing the temporal-inconsistency problem of prior methods.
  • The framework introduces a tri-level distillation strategy: Cross-Modal Feature Mixture, which fuses RGB and event features; spatio-temporal feature distillation from the Visual Geometry Grounded Transformer (VGGT); and temporal consistency distillation, which enforces coherence across frames.
  • EventVGGT substantially improves depth-estimation accuracy, reducing the absolute mean depth error on the EventScape dataset by over 53% and showing strong zero-shot generalization on the unseen DENSE and MVSEC datasets.
  • The work leverages an advanced vision foundation model and multi-view geometric priors to strengthen event-based 3D perception even under harsh conditions such as high-speed motion and extreme lighting.
  • By improving the robustness and temporal consistency of event-based depth prediction, the work benefits robotics, autonomous driving, and other vision-based systems that require reliable depth perception under dynamic conditions.

Computer Science > Computer Vision and Pattern Recognition

arXiv:2603.09385 (cs)
[Submitted on 10 Mar 2026]

Title: EventVGGT: Exploring Cross-Modal Distillation for Consistent Event-based Depth Estimation

By Yinrui Ren and 10 other authors
Abstract: Event cameras offer superior sensitivity to high-speed motion and extreme lighting, making event-based monocular depth estimation a promising approach for robust 3D perception in challenging conditions. However, progress is severely hindered by the scarcity of dense depth annotations. While recent annotation-free approaches mitigate this by distilling knowledge from Vision Foundation Models (VFMs), a critical limitation persists: they process event streams as independent frames. By neglecting the inherent temporal continuity of event data, these methods fail to leverage the rich temporal priors encoded in VFMs, ultimately yielding temporally inconsistent and less accurate depth predictions. To address this, we introduce EventVGGT, a novel framework that explicitly models the event stream as a coherent video sequence. To the best of our knowledge, we are the first to distill spatio-temporal and multi-view geometric priors from the Visual Geometry Grounded Transformer (VGGT) into the event domain. We achieve this via a comprehensive tri-level distillation strategy: (i) Cross-Modal Feature Mixture (CMFM) bridges the modality gap at the output level by fusing RGB and event features to generate auxiliary depth predictions; (ii) Spatio-Temporal Feature Distillation (STFD) distills VGGT's powerful spatio-temporal representations at the feature level; and (iii) Temporal Consistency Distillation (TCD) enforces cross-frame coherence at the temporal level by aligning inter-frame depth changes. Extensive experiments demonstrate that EventVGGT consistently outperforms existing methods -- reducing the absolute mean depth error at 30m by over 53% on EventScape (from 2.30 to 1.06) -- while exhibiting robust zero-shot generalization on the unseen DENSE and MVSEC datasets.
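To make the tri-level strategy concrete, the sketch below illustrates one plausible shape of the three distillation objectives on toy arrays. This is not the authors' implementation: the function names, the L2/L1 choices, and the linear feature mixture are assumptions made for illustration; the paper only specifies that CMFM fuses RGB and event features into auxiliary depth predictions, STFD matches features against the frozen VGGT teacher, and TCD aligns inter-frame depth changes.

```python
import numpy as np

def stfd_loss(student_feats, teacher_feats):
    # Spatio-Temporal Feature Distillation (sketch): pull student features
    # toward the frozen VGGT teacher's features with an L2 penalty.
    return np.mean((student_feats - teacher_feats) ** 2)

def tcd_loss(student_depth, teacher_depth):
    # Temporal Consistency Distillation (sketch): align inter-frame depth
    # *changes* (differences along the time axis), not absolute depths.
    ds = np.diff(student_depth, axis=0)   # student frame-to-frame change
    dt = np.diff(teacher_depth, axis=0)   # teacher frame-to-frame change
    return np.mean(np.abs(ds - dt))

def cmfm_mixture(rgb_feats, event_feats, alpha=0.5):
    # Cross-Modal Feature Mixture (sketch): blend RGB and event features;
    # in the paper this mixture feeds a head that produces auxiliary
    # depth predictions. A fixed linear blend stands in for that fusion.
    return alpha * rgb_feats + (1.0 - alpha) * event_feats

# Toy tensors: T frames, C flattened feature/pixel entries per frame.
T, C = 4, 16
rng = np.random.default_rng(0)
teacher_f = rng.normal(size=(T, C))
student_f = teacher_f + 0.1 * rng.normal(size=(T, C))
teacher_d = rng.uniform(1.0, 30.0, size=(T, C))
student_d = teacher_d + 0.05 * rng.normal(size=(T, C))

mixed = cmfm_mixture(rgb_feats=teacher_f, event_feats=student_f)
total = stfd_loss(student_f, teacher_f) + tcd_loss(student_d, teacher_d)
print(mixed.shape, total > 0.0)
```

In a real training loop each term would carry its own weight and the teacher branches would be frozen; the point here is only that TCD penalizes disagreement in depth *differences* across consecutive frames, which is what enforces temporal coherence beyond per-frame accuracy.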
Subjects: Computer Vision and Pattern Recognition (cs.CV)
Cite as: arXiv:2603.09385 [cs.CV]
  (or arXiv:2603.09385v1 [cs.CV] for this version)
  https://doi.org/10.48550/arXiv.2603.09385
arXiv-issued DOI via DataCite

Submission history

From: Jinjing Zhu
[v1] Tue, 10 Mar 2026 08:57:51 UTC (4,554 KB)