Unposed-to-3D: Learning Simulation-Ready Vehicles from Real-World Images
arXiv cs.CV · April 22, 2026
📰 News · Developer Stack & Infrastructure · Models & Research
Key Points
- The paper targets a key limitation of 3D vehicle generation: the domain gap between synthetic training data and real-world driving images.
- Unposed-to-3D reconstructs simulation-ready 3D vehicle models from image-only supervision via a two-stage pipeline: it first learns from posed images with known camera parameters, then drops camera supervision to train on unposed, in-the-wild images.
- A camera prediction head estimates the pose of each unposed image, and differentiable rendering provides self-supervised photometric feedback that drives learning of the 3D geometry (see the first sketch after this list).
- To make the outputs usable in simulations and digital twins, the method adds a scale-aware module that recovers real-world vehicle dimensions and a harmonization module that aligns lighting and appearance with the target driving scene (see the second sketch below).
- Experiments indicate the approach produces realistic, pose-consistent, and appearance-harmonized 3D vehicles that integrate better into driving scenes than prior methods trained on synthetic data.
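The key points above describe a self-supervised loop for the unposed stage: a camera head predicts a pose, a differentiable renderer produces an image under that pose, and a photometric loss against the real photo supervises both pose and geometry. The following is a minimal sketch of that idea in PyTorch; `CameraHead`, `photometric_loss`, `unposed_step`, and the `renderer` callable are hypothetical stand-ins chosen for illustration, not the paper's actual architecture or API.

```python
# Hedged sketch of the unposed, self-supervised training step described above.
# All module and function names here are illustrative assumptions.
import torch
import torch.nn as nn


class CameraHead(nn.Module):
    """Predicts a camera pose (6D rotation parameterization + translation) from an image."""

    def __init__(self, feat_dim: int = 256):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        self.pose = nn.Linear(feat_dim, 9)  # 6 rotation params + 3 translation params

    def forward(self, img):
        p = self.pose(self.backbone(img))
        return p[:, :6], p[:, 6:]  # (rotation_6d, translation)


def photometric_loss(rendered, target, mask=None):
    """Self-supervised L1 photometric loss between the render and the real image."""
    diff = (rendered - target).abs()
    if mask is not None:
        diff = diff * mask
    return diff.mean()


def unposed_step(image, vehicle_3d, camera_head, renderer, optimizer):
    """One step without camera supervision: predict pose, render, compare to the photo.

    `renderer` is assumed to be any differentiable renderer callable that maps
    (3D representation, rotation, translation) to an image tensor.
    """
    rot6d, trans = camera_head(image)
    rendered = renderer(vehicle_3d, rot6d, trans)
    loss = photometric_loss(rendered, image)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because the renderer is differentiable, the photometric error backpropagates to both the predicted pose and the 3D representation, which is what lets the second stage run on unposed images alone.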
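For the scale-aware sizing and harmonization modules, a plausible minimal sketch is below, under the same assumptions: `ScaleHead`, `rescale_to_metric`, and `harmonize_colors` are invented names, and the per-channel mean/std color matching is only a simple stand-in for whatever appearance harmonization the paper actually performs.

```python
# Hedged sketch: metric rescaling of the reconstructed vehicle plus a very simple
# appearance alignment with the target scene. Names and methods are assumptions.
import torch
import torch.nn as nn


class ScaleHead(nn.Module):
    """Predicts vehicle length/width/height in meters from an image feature vector."""

    def __init__(self, feat_dim: int = 256):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU(), nn.Linear(128, 3))

    def forward(self, feat):
        # softplus keeps the predicted dimensions strictly positive
        return nn.functional.softplus(self.mlp(feat))


def rescale_to_metric(vertices, predicted_dims):
    """Scale unit-normalized vertices (N, 3) so the bounding box matches predicted_dims (3,)."""
    extents = vertices.max(dim=0).values - vertices.min(dim=0).values
    scale = predicted_dims / extents.clamp(min=1e-6)
    return vertices * scale


def harmonize_colors(asset_rgb, scene_rgb):
    """Match per-channel mean/std of the rendered asset (3, H, W) to the surrounding
    scene crop (3, H', W'). A crude stand-in for a learned harmonization module."""
    a_mean = asset_rgb.mean(dim=(1, 2), keepdim=True)
    a_std = asset_rgb.std(dim=(1, 2), keepdim=True)
    s_mean = scene_rgb.mean(dim=(1, 2), keepdim=True)
    s_std = scene_rgb.std(dim=(1, 2), keepdim=True)
    return (asset_rgb - a_mean) / a_std.clamp(min=1e-6) * s_std + s_mean
```

The point of both pieces is the same as in the summary: without metric scale and scene-consistent appearance, a reconstructed vehicle may look correct in isolation but fail when dropped into a driving simulation or digital twin.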