Drive-Through 3D Vehicle Exterior Reconstruction via Dynamic-Scene SfM and Distortion-Aware Gaussian Splatting

arXiv cs.RO / 3/30/2026


Key Points

  • The paper targets high-fidelity 3D reconstruction of vehicle exteriors in dynamic, cluttered dealership drive-through scenes, tackling challenges absent from static-scene photogrammetry: a moving target, wide-angle lens distortion, specular automotive paint, and non-rigid wheel motion.
  • It proposes an end-to-end pipeline with a two-pillar camera rig that isolates the moving vehicle using SAM 3 instance segmentation plus motion gating, explicitly masks non-rigid wheels to better satisfy epipolar geometry, and extracts correspondences on raw distorted 4K images using the RoMa v2 learned matcher.
  • The method integrates correspondences into a rig-aware SfM optimization with CAD-derived relative pose priors to reduce scale drift, improving geometric consistency for downstream rendering.
  • For high-quality visualization, it employs distortion-aware 3D Gaussian Splatting (3DGUT) with a stochastic Markov Chain Monte Carlo densification strategy aimed at rendering reflective surfaces.
  • Experiments on 25 real vehicles across 10 dealerships report PSNR 28.66 dB, SSIM 0.89, and LPIPS 0.21 on held-out views, yielding a 3.85 dB improvement over standard 3D Gaussian Splatting and claiming “inspection-grade” interactive models without studio capture.
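As a quick illustration (not part of the paper), the reported PSNR figures map directly to per-pixel mean squared error; a minimal sketch, assuming images normalized to [0, 1]:

```python
import numpy as np

def psnr(rendered: np.ndarray, reference: np.ndarray, max_val: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB between two images valued in [0, max_val]."""
    mse = np.mean((rendered.astype(np.float64) - reference.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10((max_val ** 2) / mse)

# The reported 3.85 dB gain over standard 3D-GS corresponds to cutting MSE
# by a factor of 10**(3.85 / 10), roughly 2.4x.
mse_reduction = 10.0 ** (3.85 / 10.0)
```

So the headline 28.66 dB vs. the implied 24.81 dB baseline means the full pipeline's per-pixel squared error is about 2.4 times lower on held-out views.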

Abstract

High-fidelity 3D reconstruction of vehicle exteriors improves buyer confidence in online automotive marketplaces, but generating these models in cluttered dealership drive-throughs presents severe technical challenges. Unlike static-scene photogrammetry, this setting features a dynamic vehicle moving against heavily cluttered, static backgrounds. The problem is further compounded by wide-angle lens distortion, specular automotive paint, and non-rigid wheel rotations that violate classical epipolar constraints. We propose an end-to-end pipeline utilizing a two-pillar camera rig. First, we resolve dynamic-scene ambiguities by coupling SAM 3 instance segmentation with motion gating to cleanly isolate the moving vehicle, explicitly masking out non-rigid wheels to enforce strict epipolar geometry. Second, we extract robust correspondences directly on raw, distorted 4K imagery using the RoMa v2 learned matcher guided by semantic confidence masks. Third, these matches are integrated into a rig-aware SfM optimization that utilizes CAD-derived relative pose priors to eliminate scale drift. Finally, we use a distortion-aware 3D Gaussian Splatting framework (3DGUT) coupled with a stochastic Markov Chain Monte Carlo (MCMC) densification strategy to render reflective surfaces. Evaluations on 25 real-world vehicles across 10 dealerships demonstrate that our full pipeline achieves a PSNR of 28.66 dB, an SSIM of 0.89, and an LPIPS of 0.21 on held-out views, a 3.85 dB improvement over standard 3D-GS, and delivers inspection-grade interactive 3D models without controlled studio infrastructure.
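The segmentation step described above combines an instance mask with a temporal motion gate, then subtracts wheel regions. The function and thresholds below are hypothetical, a minimal sketch of the idea rather than the paper's implementation, assuming grayscale frames in [0, 1] and boolean masks:

```python
import numpy as np

def gate_vehicle_mask(instance_mask: np.ndarray,
                      prev_frame: np.ndarray,
                      curr_frame: np.ndarray,
                      wheel_masks: list,
                      motion_thresh: float = 0.05) -> np.ndarray:
    """Keep only instance pixels that also show frame-to-frame motion,
    then remove non-rigid wheel regions before feature matching.

    instance_mask : boolean segmentation of the vehicle (e.g. from SAM 3)
    prev_frame, curr_frame : consecutive grayscale frames in [0, 1]
    wheel_masks : boolean masks over rotating wheel regions to exclude
    """
    # Crude temporal gate: a pixel counts as "moving" if its intensity
    # changed by more than motion_thresh between consecutive frames.
    motion = np.abs(curr_frame - prev_frame) > motion_thresh
    gated = instance_mask & motion
    # Wheels rotate non-rigidly relative to the body, violating epipolar
    # geometry, so drop them from the correspondence mask entirely.
    for wheel in wheel_masks:
        gated &= ~wheel
    return gated
```

Only pixels surviving this mask would then be handed to the learned matcher, which is what keeps the static clutter and the non-rigid wheels from contaminating the rig-aware SfM stage.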