
InstantHDR: Single-forward Gaussian Splatting for High Dynamic Range 3D Reconstruction

arXiv cs.CV / 3/13/2026


Key Points

  • InstantHDR proposes a feed-forward network that reconstructs HDR 3D scenes from uncalibrated multi-exposure LDR inputs in a single forward pass, reducing reliance on camera poses and dense point clouds.
  • The method combines geometry-guided appearance modeling for multi-exposure fusion with a meta-network that generalizes scene-specific tone mapping across different lighting and camera responses.
  • To enable generalizable HDR modeling, the authors build HDR-Pretrain, a pre-training dataset of 168 Blender-rendered scenes with diverse lighting and camera response functions.
  • Experimental results indicate synthesis quality comparable to optimization-based HDR methods while delivering substantial speedups (roughly 700× in the single-forward setting and roughly 20× with post-optimization).
  • The authors plan to release code, models, and datasets after peer review.
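To ground the fusion step the bullets describe, here is a minimal sketch of classic weighted multi-exposure HDR merging (in the style of Debevec and Malik). This is a generic illustration of the problem InstantHDR addresses, not the paper's learned, geometry-guided fusion; the linear camera response and the hat-shaped weighting are simplifying assumptions.

```python
import numpy as np

def fuse_ldr_exposures(ldr_images, exposure_times):
    """Weighted multi-exposure HDR fusion (classic, hand-crafted baseline).

    ldr_images: list of float arrays in [0, 1], one per exposure.
    exposure_times: matching list of exposure times in seconds.
    Returns a per-pixel HDR radiance estimate.
    """
    eps = 1e-6
    num = np.zeros_like(ldr_images[0], dtype=np.float64)
    den = np.zeros_like(ldr_images[0], dtype=np.float64)
    for z, t in zip(ldr_images, exposure_times):
        z = z.astype(np.float64)
        # Hat weight: trust mid-tones, distrust under-/over-exposed pixels.
        w = 1.0 - np.abs(2.0 * z - 1.0)
        # Assumes a linear response: radiance ~ pixel value / exposure time.
        num += w * (z / t)
        den += w
    return num / (den + eps)
```

A learned system like InstantHDR replaces both the fixed weighting and the assumed response curve, which is why the authors emphasize diverse camera response functions in HDR-Pretrain.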

Abstract

High dynamic range (HDR) novel view synthesis (NVS) aims to reconstruct HDR scenes from multi-exposure low dynamic range (LDR) images. Existing HDR pipelines heavily rely on known camera poses, well-initialized dense point clouds, and time-consuming per-scene optimization. Current feed-forward alternatives overlook the HDR problem by assuming exposure-invariant appearance. To bridge this gap, we propose InstantHDR, a feed-forward network that reconstructs 3D HDR scenes from uncalibrated multi-exposure LDR collections in a single forward pass. Specifically, we design geometry-guided appearance modeling for multi-exposure fusion, and a meta-network for generalizable scene-specific tone mapping. Due to the lack of HDR scene data, we build a pre-training dataset, called HDR-Pretrain, for generalizable feed-forward HDR models, featuring 168 Blender-rendered scenes, diverse lighting types, and multiple camera response functions. Comprehensive experiments show that our InstantHDR delivers synthesis performance comparable to state-of-the-art optimization-based HDR methods while achieving ~700× and ~20× reconstruction speedups with our single-forward and post-optimization settings, respectively. All code, models, and datasets will be released after the review process.
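For context on the "scene-specific tone mapping" the abstract mentions: tone mapping compresses reconstructed HDR radiance back into a displayable LDR range. The sketch below uses the classic Reinhard global operator as a stand-in; it is a fixed, hand-crafted mapping of the kind the paper's meta-network would replace with a learned, per-scene one, and the `key` parameter is a conventional mid-grey choice, not a value from the paper.

```python
import numpy as np

def reinhard_tonemap(hdr, key=0.18, eps=1e-6):
    """Reinhard global tone mapping of HDR luminance into [0, 1).

    hdr: non-negative HDR luminance array.
    key: target mid-grey; controls overall output brightness.
    """
    # Normalize by the scene's log-average luminance.
    log_avg = np.exp(np.mean(np.log(hdr + eps)))
    scaled = (key / log_avg) * hdr
    # Smoothly roll off high luminances instead of clipping them.
    return scaled / (1.0 + scaled)
```

Because the right mapping depends on scene lighting and the camera's response curve, a single fixed operator like this generalizes poorly, which motivates learning tone mapping per scene.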