Generative Anonymization in Event Streams

arXiv cs.CV / 4/15/2026


Key Points

  • Neuromorphic event camera deployment in public spaces creates privacy risks because event-to-video (E2V) models can reconstruct high-fidelity intensity images that may expose identities.
  • The paper proposes a generative anonymization framework that projects sparse event streams into an intermediate intensity representation, uses pretrained generative models to synthesize realistic but non-existent identities, and then re-encodes the result back into the neuromorphic event domain.
  • Experiments indicate the method blocks identity recovery from E2V reconstructions while maintaining the spatiotemporal and structural integrity needed for downstream perception tasks.
  • To support rigorous evaluation, the authors introduce a new synchronized real-world dataset of paired event streams and RGB video, captured along precise robotic trajectories, to serve as a benchmark for privacy-preserving neuromorphic vision research.
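The first stage of the pipeline, projecting sparse event streams into an intermediate intensity representation, can be sketched in miniature. The helper below simply accumulates polarity-signed events into a 2-D frame; the function name, the `(x, y, t, polarity)` event layout, and the accumulation scheme are illustrative assumptions, and the paper's actual pipeline would use a learned E2V model rather than raw accumulation.

```python
import numpy as np

def events_to_frame(events, height, width):
    """Accumulate polarity-signed events into a single 2-D frame.

    `events` is an (N, 4) array of (x, y, t, polarity) rows with
    polarity in {-1, +1}. This is a toy stand-in for an intermediate
    intensity representation; a real pipeline would reconstruct it
    with a learned E2V model.
    """
    frame = np.zeros((height, width), dtype=np.float32)
    xs = events[:, 0].astype(int)
    ys = events[:, 1].astype(int)
    # ufunc.at accumulates correctly even when indices repeat.
    np.add.at(frame, (ys, xs), events[:, 3])
    return frame

# Tiny demo: two positive events at one pixel, one negative elsewhere.
demo = np.array([
    [2, 1, 0.00, +1],
    [2, 1, 0.01, +1],
    [0, 0, 0.02, -1],
])
frame = events_to_frame(demo, height=4, width=4)
print(frame[1, 2], frame[0, 0])  # → 2.0 -1.0
```

Once events live in a dense spatial grid like this, standard pretrained generative models can operate on them before the result is pushed back into the event domain.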

Abstract

Neuromorphic vision sensors offer low latency and high dynamic range, but their deployment in public spaces raises severe data protection concerns. Recent Event-to-Video (E2V) models can reconstruct high-fidelity intensity images from sparse event streams, inadvertently exposing human identities. Current obfuscation methods, such as masking or scrambling, corrupt the spatio-temporal structure, severely degrading data utility for downstream perception tasks. In this paper, to the best of our knowledge, we present the first generative anonymization framework for event streams to resolve this utility-privacy trade-off. By bridging the modality gap between asynchronous events and standard spatial generative models, our pipeline projects events into an intermediate intensity representation, leverages pretrained models to synthesize realistic, non-existent identities, and re-encodes the features back into the neuromorphic domain. Experiments demonstrate that our method reliably prevents identity recovery from E2V reconstructions while preserving the structural data integrity required for downstream vision tasks. Finally, to facilitate rigorous evaluation, we introduce a novel, synchronized real-world event and RGB dataset captured via precise robotic trajectories, providing a robust benchmark for future research in privacy-preserving neuromorphic vision.
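The abstract's final pipeline stage, re-encoding the anonymized intensity features back into the neuromorphic domain, can be illustrated with the standard event-camera contrast model: a pixel emits an event whenever its log intensity drifts past a fixed threshold since the last event it fired. The sketch below is a minimal simulator under that model (in the spirit of event simulators such as ESIM), not the authors' actual encoder; the function name and threshold value are assumptions.

```python
import numpy as np

def frames_to_events(frames, timestamps, threshold=0.2):
    """Re-encode a sequence of intensity frames into events.

    Emits an event (x, y, t, polarity) whenever the log intensity at
    a pixel changes by at least `threshold` since the last event at
    that pixel -- the standard event-camera contrast model. This is
    an illustrative sketch, not the paper's re-encoding network.
    """
    log_ref = np.log(frames[0] + 1e-6)  # per-pixel reference level
    events = []
    for frame, t in zip(frames[1:], timestamps[1:]):
        log_i = np.log(frame + 1e-6)
        diff = log_i - log_ref
        fired = np.abs(diff) >= threshold
        ys, xs = np.nonzero(fired)
        for y, x in zip(ys, xs):
            events.append((int(x), int(y), t, 1 if diff[y, x] > 0 else -1))
        log_ref[fired] = log_i[fired]  # reset reference where events fired
    return events

# Demo: a single pixel doubles in brightness between two frames.
frames = np.ones((2, 2, 2), dtype=np.float64)
frames[1, 0, 1] = 2.0  # pixel (x=1, y=0) brightens
evs = frames_to_events(frames, timestamps=[0.0, 0.01])
print(evs)  # → [(1, 0, 0.01, 1)]
```

Because only log-intensity *changes* are emitted, static background pixels produce no events, which is what preserves the sparse spatiotemporal character of the output stream.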