Unposed-to-3D: Learning Simulation-Ready Vehicles from Real-World Images

arXiv cs.CV · April 22, 2026


Key Points

  • The paper addresses a key gap in 3D vehicle generation by reducing the domain gap between synthetic training data and real-world driving images.
  • Unposed-to-3D reconstructs simulation-ready 3D vehicle models with image-only supervision, using a two-stage pipeline that first learns from posed images with known camera parameters and then drops camera supervision to train on unposed images.
  • A camera prediction head estimates pose from unposed images, and differentiable rendering provides self-supervised photometric feedback to drive learning of 3D geometry.
  • To make outputs usable in simulations and digital twins, the method adds a scale-aware module for real-world sizing and a harmonization module to align lighting and appearance with the target driving scene.
  • Experiments indicate the approach produces realistic, pose-consistent, and appearance-harmonized 3D vehicles that integrate better into driving scenes than prior methods trained on synthetic data.
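The second-stage training signal described above can be illustrated with a toy sketch: a camera head predicts a pose for an unposed photo, a renderer produces an image under that pose, and the pixel-wise photometric error is the only supervision. The paper uses a learned network and a full differentiable renderer; the minimal NumPy stand-in below (all function names, shapes, and the single-angle "pose" are illustrative assumptions) only shows why the loss is minimized when the predicted pose matches the true one.

```python
import numpy as np

def render(points, colors, yaw, size=16):
    """Toy orthographic 'renderer': rotate points about the vertical
    axis and splat them into a size x size RGB image (last point wins)."""
    c, s = np.cos(yaw), np.sin(yaw)
    rot = points @ np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])
    img = np.zeros((size, size, 3))
    # Map x/y coordinates in [-1, 1] to pixel indices.
    px = np.clip(((rot[:, 0] + 1) / 2 * (size - 1)).astype(int), 0, size - 1)
    py = np.clip(((rot[:, 1] + 1) / 2 * (size - 1)).astype(int), 0, size - 1)
    img[py, px] = colors
    return img

def photometric_loss(pred_yaw, target_img, points, colors):
    """Mean squared pixel error between the rendering under the predicted
    pose and the unposed target photo -- the self-supervised signal."""
    return np.mean((render(points, colors, pred_yaw) - target_img) ** 2)

rng = np.random.default_rng(0)
points = rng.uniform(-0.8, 0.8, size=(200, 3))   # toy 3D asset
colors = rng.uniform(0.0, 1.0, size=(200, 3))

true_yaw = 0.7                                   # unknown to the model
target = render(points, colors, true_yaw)        # the "real" photo

# The loss vanishes exactly when the camera head predicts the true pose,
# which is what drives joint learning of pose and geometry.
loss_at_truth = photometric_loss(true_yaw, target, points, colors)
loss_off_pose = photometric_loss(0.0, target, points, colors)
print(loss_at_truth, loss_off_pose)
```

In the actual method the renderer is differentiable, so this error backpropagates to both the camera prediction head and the 3D reconstruction network; here the comparison of the two loss values only demonstrates the supervision signal's shape.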

Abstract

Creating realistic and simulation-ready 3D assets is crucial for autonomous driving research and virtual environment construction. However, existing 3D vehicle generation methods are often trained on synthetic data with significant domain gaps from real-world distributions. The generated models often exhibit arbitrary poses and undefined scales, resulting in poor visual consistency when integrated into driving scenes. In this paper, we present Unposed-to-3D, a novel framework that learns to reconstruct 3D vehicles from real-world driving images using image-only supervision. Our approach consists of two stages. In the first stage, we train an image-to-3D reconstruction network using posed images with known camera parameters. In the second stage, we remove camera supervision and use a camera prediction head that directly estimates the camera parameters from unposed images. The predicted pose is then used for differentiable rendering to provide self-supervised photometric feedback, enabling the model to learn 3D geometry purely from unposed images. To ensure simulation readiness, we further introduce a scale-aware module to predict real-world size information, and a harmonization module that adapts the generated vehicles to the target driving scene with consistent lighting and appearance. Extensive experiments demonstrate that Unposed-to-3D effectively reconstructs realistic, pose-consistent, and harmonized 3D vehicle models from real-world images, providing a scalable path toward creating high-quality assets for driving scene simulation and digital twin environments.
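The scale-aware module addresses a concrete practical issue mentioned in the abstract: generative 3D networks typically emit geometry normalized to a canonical unit box, so a predicted metric size is needed before an asset can be placed in a driving scene at the right proportions. A minimal sketch of that final rescaling step, with all names and the example dimensions being illustrative assumptions rather than the paper's actual interface:

```python
import numpy as np

def apply_metric_scale(unit_points, size_lwh):
    """Rescale points from a canonical [-0.5, 0.5]^3 box to metric
    extents given a predicted (length, width, height) in meters."""
    return unit_points * np.asarray(size_lwh, dtype=float)

# Corners of the canonical bounding box the generator is assumed to use.
canonical = np.array([[-0.5, -0.5, -0.5],
                      [ 0.5,  0.5,  0.5]])

# Hypothetical output of the scale-aware module for a sedan (meters).
sedan_lwh = (4.6, 1.8, 1.4)

scaled = apply_metric_scale(canonical, sedan_lwh)
extent = scaled.max(axis=0) - scaled.min(axis=0)
print(extent)  # → [4.6 1.8 1.4]
```

Without this step, a generated vehicle dropped into a simulator has undefined real-world size, which is exactly the "undefined scales" failure mode the abstract attributes to prior methods.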