ReLi3D: Relightable Multi-view 3D Reconstruction with Disentangled Illumination
arXiv cs.CV / 3/23/2026
Key Points
- ReLi3D introduces a unified end-to-end pipeline that reconstructs complete 3D geometry, spatially-varying materials, and environment illumination from sparse multi-view images in under one second.
- The method uses a transformer cross-conditioning architecture to fuse multi-view inputs, significantly improving material and illumination disentanglement versus single-view approaches.
- It features a two-path prediction strategy: one path for geometry/appearance and a second path for environment illumination derived from image backgrounds or object reflections.
- A differentiable Monte Carlo multiple importance sampling renderer enables end-to-end optimization of illumination within the training pipeline.
- A mixed-domain training protocol combining synthetic PBR data with real-world RGB captures yields generalizable predictions of geometry, materials, and illumination, enabling near-instantaneous creation of relightable 3D assets.
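The differentiable renderer mentioned above relies on Monte Carlo multiple importance sampling (MIS), which combines several sampling strategies (e.g., light sampling and BRDF sampling) so that each covers the other's high-variance cases. Below is a minimal, hypothetical sketch of MIS with the balance heuristic on a 1D toy integrand; the function names and the integrand are illustrative assumptions, not taken from the ReLi3D paper.

```python
import math
import random

def mis_estimate(f, pdfs, samplers, n_per_strategy=4096, seed=0):
    """Estimate an integral by combining sampling strategies with the
    balance heuristic: w_i(x) = p_i(x) / sum_j p_j(x), so each sample
    contributes w_i * f / p_i = f / sum_j p_j.  (Toy sketch, not the
    paper's renderer.)"""
    rng = random.Random(seed)
    total = 0.0
    for sample in samplers:
        for _ in range(n_per_strategy):
            x = sample(rng)
            denom = sum(p(x) for p in pdfs)  # sum of all strategy pdfs at x
            if denom > 0.0:
                total += f(x) / denom
    return total / n_per_strategy

# Toy integrand on [0, 1]: a sharp "specular" peak plus a broad
# "diffuse" constant term (zero outside the integration domain).
def f(x):
    if 0.0 <= x <= 1.0:
        return math.exp(-200.0 * (x - 0.5) ** 2) + 0.5
    return 0.0

# Strategy 1: uniform sampling, good for the broad term.
pdf_uniform = lambda x: 1.0 if 0.0 <= x <= 1.0 else 0.0
sample_uniform = lambda rng: rng.random()

# Strategy 2: a Gaussian matched to the peak, good for the spike.
sigma = 1.0 / math.sqrt(400.0)
pdf_peak = lambda x: math.exp(-200.0 * (x - 0.5) ** 2) / (sigma * math.sqrt(2.0 * math.pi))
sample_peak = lambda rng: rng.gauss(0.5, sigma)

estimate = mis_estimate(f, [pdf_uniform, pdf_peak],
                        [sample_uniform, sample_peak])
# Converges toward the true value, sqrt(pi/200) + 0.5 ≈ 0.6253.
```

Neither strategy alone handles both terms well: uniform sampling has high variance on the narrow peak, and peak sampling almost never covers the constant term. The balance-heuristic weight down-weights each strategy exactly where the other's pdf dominates, which is what makes the renderer's illumination gradients well-behaved enough for end-to-end training.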