Color-Encoded Illumination for High-Speed Volumetric Scene Reconstruction

arXiv cs.CV / April 30, 2026


Key Points

  • The paper addresses the bandwidth limits of conventional cameras (about 30–60 FPS) that make existing 3D dynamic scene reconstruction methods unsuitable for fast motion.
  • It proposes capturing high-speed volumetric reconstructions with unmodified low-speed cameras by illuminating the scene with a rapid, sequential color-coded pattern that encodes temporal dynamics into spatial and color variations.
  • By leveraging simultaneous multi-view capture, the method enables recovery of a high-speed 3D volumetric representation without changing camera optics or adding mechanical components.
  • The authors introduce a dynamic Gaussian Splatting-based technique to decode the encoded temporal information from the captured images.
  • Experiments on simulated and real multi-camera setups demonstrate first-of-its-kind high-speed volumetric scene reconstructions.
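To make the encoding idea concrete, here is a deliberately simplified toy sketch (not the paper's actual forward model or Gaussian Splatting decoder): several grayscale sub-frames are each lit with a distinct, linearly independent RGB illumination color, the camera integrates them into a single RGB exposure, and the sub-frames are recovered per pixel by inverting the small color-code matrix. The code matrix values and scene sizes below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
T, H, W = 3, 4, 4  # three high-speed sub-frames within one camera exposure

# High-speed scene: T grayscale sub-frames (the quantity we want to recover).
subframes = rng.random((T, H, W))

# One RGB illumination color per sub-frame (rows must be linearly
# independent so the per-pixel decoding step is well-posed).
codes = np.array([[1.0, 0.1, 0.0],
                  [0.0, 1.0, 0.1],
                  [0.1, 0.0, 1.0]])

# Single low-speed exposure: the sensor sums each sub-frame weighted by its
# code color, i.e. captured[h, w, c] = sum_t codes[t, c] * subframes[t, h, w].
captured = np.einsum('tc,thw->hwc', codes, subframes)

# Decoding: per pixel, captured = codes.T @ subframe_values, so invert the
# 3x3 code matrix (least squares would be used with more sub-frames/noise).
decoded = np.einsum('tc,hwc->thw', np.linalg.inv(codes.T), captured)

print(np.allclose(decoded, subframes))  # → True
```

With exactly three sub-frames the three color channels make the system square and exactly invertible; the paper's setting is far harder (unknown geometry, multi-view, real reflectances), which is why it needs a learned dynamic Gaussian Splatting decoder rather than a per-pixel matrix inverse.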

Abstract

The task of capturing and rendering 3D dynamic scenes from 2D images has become increasingly popular in recent years. However, most conventional cameras are bandwidth-limited to 30–60 FPS, restricting these methods to static or slowly evolving scenes. While overcoming bandwidth limitations is difficult for general scenes, recent years have seen a flurry of computational imaging methods that yield high-speed videos using conventional cameras for specific applications (e.g., motion capture and particle image velocimetry). However, most of these methods require modifications to a camera's optics or the addition of mechanically moving components, limiting them to single-view high-speed capture. Consequently, these methods cannot be readily used to capture a 3D representation of rapid scene motion. In this paper, we propose a novel method to capture and reconstruct a volumetric representation of a high-speed scene using only unaugmented low-speed cameras. Instead of modifying the hardware or optics of each individual camera, we encode high-speed scene dynamics by illuminating the scene with a rapid sequence of color-coded patterns. This results in simultaneous multi-view capture of the scene, where high-speed temporal information is encoded in the spatial intensity and color variations of the captured images. To construct a high-speed volumetric representation of the dynamic scene, we develop a novel dynamic Gaussian Splatting-based approach that decodes the temporal information from the images. We evaluate our approach on simulated scenes and real-world experiments using a multi-camera imaging setup, showing first-of-its-kind high-speed volumetric scene reconstructions.