Know3D: Prompting 3D Generation with Knowledge from Vision-Language Models

arXiv cs.CV / 3/25/2026


Key Points

  • The paper introduces Know3D, a framework that injects knowledge from multimodal large language models into 3D generation to make unseen regions more controllable.
  • It uses a VLM-guided diffusion approach where the VLM provides semantic understanding and guidance, while the diffusion model transfers that knowledge into the 3D reconstruction process.
  • Know3D specifically targets the ambiguity and lack of global structural priors in single-view 3D generation by improving how back-view (unobserved) regions are produced.
  • The authors report that the approach turns back-view hallucination—which is often stochastic and unreliable—into a semantically controllable generation pipeline aligned with user intent.
  • The work is positioned as a promising direction for future 3D generative models that better connect abstract instructions to geometric reconstruction.
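The "latent hidden-state injection" mentioned in the abstract can be pictured as cross-attention from the diffusion model's latent tokens onto the VLM's hidden states. The sketch below is purely illustrative — `HiddenStateInjector`, all dimensions, and the residual cross-attention design are assumptions for exposition, not details from the paper:

```python
import torch
import torch.nn as nn

class HiddenStateInjector(nn.Module):
    """Hypothetical sketch: inject VLM hidden states into a diffusion
    latent via cross-attention. Module name and sizes are illustrative,
    not taken from Know3D."""
    def __init__(self, latent_dim=64, vlm_dim=128, heads=4):
        super().__init__()
        # Project VLM hidden states down to the latent width
        self.proj = nn.Linear(vlm_dim, latent_dim)
        self.attn = nn.MultiheadAttention(latent_dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(latent_dim)

    def forward(self, latent, vlm_hidden):
        # latent:     (B, N_latent_tokens, latent_dim) diffusion features
        # vlm_hidden: (B, N_text_tokens, vlm_dim) VLM hidden states
        kv = self.proj(vlm_hidden)
        injected, _ = self.attn(latent, kv, kv)  # latent queries attend to VLM states
        return self.norm(latent + injected)      # residual injection

# Usage with dummy tensors
inj = HiddenStateInjector()
latent = torch.randn(2, 16, 64)      # e.g. 16 latent tokens
vlm_states = torch.randn(2, 8, 128)  # e.g. 8 instruction tokens
out = inj(latent, vlm_states)
print(out.shape)  # torch.Size([2, 16, 64])
```

The residual form keeps the diffusion latent intact when the injected signal is weak, which is one common way such conditioning layers are wired; the paper may use a different fusion mechanism.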

Abstract

Recent advances in 3D generation have improved the fidelity and geometric detail of synthesized 3D assets. However, due to the inherent ambiguity of single-view observations and the lack of robust global structural priors caused by limited 3D training data, the unseen regions generated by existing models are often stochastic and difficult to control, sometimes failing to align with user intent or producing implausible geometries. In this paper, we propose Know3D, a novel framework that incorporates rich knowledge from multimodal large language models into the 3D generative process via latent hidden-state injection, enabling language-controllable generation of the back view of 3D assets. We utilize a VLM-diffusion-based model, where the VLM is responsible for semantic understanding and guidance, and the diffusion model acts as a bridge that transfers semantic knowledge from the VLM to the 3D generation model. In this way, we bridge the gap between abstract textual instructions and the geometric reconstruction of unobserved regions, transforming the traditionally stochastic back-view hallucination into a semantically controllable process and demonstrating a promising direction for future 3D generation models.