LiFR-Seg: Anytime High-Frame-Rate Segmentation via Event-Guided Propagation

arXiv cs.CV / 3/24/2026


Key Points

  • LiFR-Seg introduces “Anytime Interframe Semantic Segmentation,” enabling dense segmentation at arbitrary times using only one past RGB frame plus asynchronous event-camera data instead of relying on low-frame-rate (LFR) video.
  • The method propagates deep semantic features through time via an uncertainty-aware warping process driven by an event-derived motion field with learned confidence to reduce feature degradation in highly dynamic scenes.
  • A temporal memory attention module is used to maintain semantic coherence over time, especially under motion and scene changes.
  • Experiments on the DSEC dataset and a new high-frequency synthetic benchmark (SHF-DSEC) show the LFR system reaching 73.82% mIoU on DSEC, within 0.09% of an HFR upper bound that has full access to the target frame.
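
The uncertainty-aware warping idea can be illustrated with a minimal sketch: features from the last RGB frame are backward-warped along an event-derived motion field, and a learned per-pixel confidence decides how much to trust the warp versus the unwarped features. This is not the paper's implementation; the function name, nearest-neighbour sampling, and the simple confidence blend are illustrative assumptions.

```python
import numpy as np

def warp_features(feat, flow, conf):
    """Uncertainty-aware feature warping (illustrative sketch, not LiFR-Seg's code).

    feat: (C, H, W) semantic features from the last RGB frame
    flow: (2, H, W) event-derived motion field (dy, dx), target -> source
    conf: (H, W) learned confidence in [0, 1] for the motion field
    """
    C, H, W = feat.shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    # Source coordinate for each target pixel; nearest-neighbour sampling
    # stands in for the bilinear sampling a real implementation would use.
    sy = np.clip(np.rint(ys + flow[0]).astype(int), 0, H - 1)
    sx = np.clip(np.rint(xs + flow[1]).astype(int), 0, W - 1)
    warped = feat[:, sy, sx]                      # (C, H, W) warped features
    # Trust the warp where confidence is high; fall back to the old
    # features where the event-derived motion is unreliable.
    return conf * warped + (1.0 - conf) * feat
```

With zero flow the output equals the input regardless of confidence; with nonzero flow and confidence 0.5, the result is an even blend of warped and unwarped features.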

Abstract

Dense semantic segmentation in dynamic environments is fundamentally limited by the low-frame-rate (LFR) nature of standard cameras, which creates critical perceptual gaps between frames. To solve this, we introduce Anytime Interframe Semantic Segmentation: a new task for predicting segmentation at any arbitrary time using only a single past RGB frame and a stream of asynchronous event data. This task presents a core challenge: how to robustly propagate dense semantic features using a motion field derived from sparse and often noisy event data, all while mitigating feature degradation in highly dynamic scenes. We propose LiFR-Seg, a novel framework that directly addresses these challenges by propagating deep semantic features through time. The core of our method is an uncertainty-aware warping process, guided by an event-driven motion field and its learned, explicit confidence. A temporal memory attention module further ensures coherence in dynamic scenarios. We validate our method on the DSEC dataset and a new high-frequency synthetic benchmark (SHF-DSEC) we contribute. Remarkably, our LFR system achieves performance (73.82% mIoU on DSEC) that is statistically indistinguishable from an HFR upper-bound (within 0.09%) that has full access to the target frame. This work presents a new, efficient paradigm for achieving robust, high-frame-rate perception with low-frame-rate hardware. Project Page: https://candy-crusher.github.io/LiFR_Seg_Proj/#; Code: https://github.com/Candy-Crusher/LiFR-Seg.git.
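
The temporal memory attention module mentioned in the abstract can be pictured as a standard attention read-out: current-frame feature tokens query a bank of tokens accumulated from past frames, pulling in context that survives motion and scene changes. The sketch below is a generic single-head attention in NumPy, offered only as an assumption about the mechanism's general shape, not the paper's architecture.

```python
import numpy as np

def memory_attention(query, memory):
    """Single-head attention over a temporal memory bank (generic sketch).

    query:  (N, D) feature tokens for the current prediction time
    memory: (M, D) feature tokens aggregated from past frames
    Returns (N, D): query tokens refined with memory context.
    """
    d = query.shape[1]
    scores = query @ memory.T / np.sqrt(d)        # (N, M) scaled similarities
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)       # softmax over memory slots
    return attn @ memory                          # weighted memory read-out
```

Each output token is a convex combination of memory tokens, so coherent past features can correct warped features that have degraded in highly dynamic regions.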