GeRM: A Generative Rendering Model From Physically Realistic to Photorealistic

arXiv cs.CV / 4/13/2026


Key Points

  • The paper identifies a “P2P gap” between physically-based rendering (PBR) and photorealistic rendering (PRR), where PBR’s physical correctness still depends on realistic geometry/materials to achieve true photorealism.
  • It proposes GeRM, the first multi-modal generative rendering model to unify PBR and PRR, modeling the PBR-to-PRR transition as a distribution transfer learned via a distribution transfer vector field (DTV Field).
  • GeRM uses physical representations (G-buffers) together with text prompts, along with a progressive incremental injection strategy, to generate controllable photorealistic images while navigating the continuum between fidelity and perceptual realism.
  • The approach builds an expert-guided paired dataset, P2P-50K, using a multi-agent VLM framework to create transfer pairs that supervise learning of the vector field.
  • A multi-condition ControlNet is introduced to learn and apply the DTV Field, progressively transforming PBR images into PRR outputs guided by G-buffers, prompts, and region-focused cues.

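The core idea in the key points above, that each paired (PBR, PRR) sample defines one transfer vector and the learned field is applied progressively, can be illustrated with a toy sketch. This is purely illustrative: GeRM learns the DTV Field with a multi-condition ControlNet, whereas here a single averaged vector stands in for the field, and all array shapes are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "paired dataset": each row is a flattened 4-pixel image.
pbr_images = rng.random((8, 4))              # physically-based renders
shift = np.array([0.1, -0.2, 0.05, 0.0])     # toy appearance shift
prr_images = pbr_images + shift              # photorealistic targets

# Each (PBR, PRR) pair defines one transfer vector; the "field" here
# is just their mean -- a stand-in for the learned DTV Field.
vectors = prr_images - pbr_images
dtv = vectors.mean(axis=0)

def transfer_progressively(x_pbr, field, steps=5):
    """Move from a PBR image toward PRR in `steps` equal increments."""
    x = x_pbr.copy()
    for _ in range(steps):
        x = x + field / steps
    return x

out = transfer_progressively(pbr_images[0], dtv)
print(np.allclose(out, prr_images[0]))  # the toy field recovers the target
```

In the paper the increments are not a fixed global vector but the output of a conditional network evaluated along the trajectory; the sketch only shows the progressive-application structure.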
Abstract

For decades, Physically-Based Rendering (PBR) has been the foundation of synthesizing photorealistic images, and is therefore sometimes loosely referred to as Photorealistic Rendering (PRR). While PBR is indeed a mathematical simulation of light transport that guarantees physical correctness, photorealism additionally relies on realistic digital models of the geometry and appearance of the real world, leaving a barely explored gap from PBR to PRR (P2P). Consequently, the path toward photorealism faces a critical dilemma: explicit simulation toward PRR is encumbered by the unattainability of realistic digital models of real-world scenes, while implicit generative models sacrifice controllability and geometric consistency. Based on this insight, this paper presents the problem, data, and approach for mitigating the P2P gap, followed by the first multi-modal generative rendering model, dubbed GeRM, to unify PBR and PRR. GeRM integrates physical attributes such as G-buffers with text prompts, along with a progressive incremental injection strategy, to generate controllable photorealistic images, allowing users to fluidly navigate the continuum between strict physical fidelity and perceptual photorealism. Technically, we model the transition between PBR and PRR images as a distribution transfer and aim to learn a distribution transfer vector field (DTV Field) to guide this process. To define the learning objective, we first leverage a multi-agent VLM framework to construct an expert-guided pairwise P2P transfer dataset, named P2P-50K, where each paired sample corresponds to a transfer vector in the DTV Field. Subsequently, we propose a multi-condition ControlNet to learn the DTV Field, which synthesizes PBR images and progressively transitions them into PRR images, guided by G-buffers, text prompts, and cues for enhanced regions.
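The abstract lists several per-pixel conditions (G-buffers, cues for enhanced regions) feeding the multi-condition ControlNet. A common way to combine such conditions, and an assumption here since the paper's actual architecture is not described in this summary, is channel-wise concatenation into a single conditioning tensor; buffer names and shapes below are illustrative, not GeRM's API.

```python
import numpy as np

H, W = 4, 4
albedo = np.zeros((3, H, W))   # G-buffer: base color (RGB)
normal = np.zeros((3, H, W))   # G-buffer: surface normals (xyz)
depth  = np.zeros((1, H, W))   # G-buffer: depth
region = np.zeros((1, H, W))   # mask cueing regions to enhance

def pack_conditions(*buffers):
    """Channel-wise concatenation of per-pixel conditioning maps."""
    return np.concatenate(buffers, axis=0)

cond = pack_conditions(albedo, normal, depth, region)
print(cond.shape)  # (8, 4, 4)
```

Text prompts would enter separately through the text encoder, as in standard ControlNet setups, rather than through this per-pixel tensor.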