Toward Real-World Adoption of Portrait Relighting via Hybrid Domain Knowledge Fusion

arXiv cs.CV · April 28, 2026


Key Points

  • Real-world portrait relighting adoption is slowed by domain gaps between datasets, differences in camera sensitivity, and high computational costs.
  • The paper proposes “Hybrid Domain Knowledge Fusion,” which combines synthetic OLAT (One-Light-at-a-Time) data with real-world datasets to train a compact model.
  • It uses domain-aware adaptation with specialized prior models, then applies augmented knowledge distillation to transfer multi-domain expertise into a lightweight student network.
  • Experiments report a 6x to 240x inference speedup while retaining state-of-the-art visual quality, and the training pipeline is supported by a large, high-fidelity synthetic dataset with varied ground-truth intrinsics.
  • Overall, the work targets practical deployment by explicitly addressing both data mismatch issues and runtime efficiency through hybrid data and distillation.
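The OLAT (One-Light-at-a-Time) data mentioned above exploits a classic property: because light transport is linear, a portrait under any lighting environment can be synthesized as a weighted sum of the per-light captures. The sketch below illustrates only this general principle; the array names, shapes, and weighting scheme are illustrative and not taken from the paper.

```python
import numpy as np

# Minimal sketch of OLAT-based relighting, assuming a toy stack of
# per-light images. Real OLAT captures are full-resolution photographs,
# one per light on a light stage.

rng = np.random.default_rng(0)
n_lights, h, w = 8, 4, 4                 # tiny toy resolution
olat = rng.random((n_lights, h, w))      # one image per light source

def relight(olat_images, weights):
    """Synthesize an image under a novel lighting environment.

    weights[i] is the intensity of light i in the target environment;
    the result is the weighted sum of the OLAT basis images.
    """
    return np.tensordot(weights, olat_images, axes=1)

weights = rng.random(n_lights)           # stand-in for an environment map
img = relight(olat, weights)
assert img.shape == (h, w)
# Linearity check: doubling every light's intensity doubles the image.
assert np.allclose(relight(olat, 2 * weights), 2 * img)
```

This linearity is also what makes synthetic OLAT data attractive for training: ground-truth renders under arbitrary lighting can be generated cheaply from one basis set.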

Abstract

The real-world adoption of portrait relighting is hindered by dataset domain gaps, camera sensitivity, and computational costs. We address these challenges with Hybrid Domain Knowledge Fusion, a paradigm that fuses the specialized strengths of synthetic, One-Light-at-a-Time (OLAT), and real-world datasets into a compact model. Our approach features specialized prior models hardened by domain-aware adaptation, followed by augmented knowledge distillation into a lightweight student model with multi-domain expertise. In our experiments, our method demonstrates a 6x to 240x inference speedup while maintaining state-of-the-art (SOTA) visual quality. Additionally, we construct a massive, high-fidelity synthetic dataset with diverse ground-truth intrinsics to support our training pipeline.
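The distillation step described in the abstract follows the standard teacher-student pattern: a compact student network is trained to reproduce the outputs of larger expert models. The sketch below shows only that generic pattern with a toy linear model; the paper's multi-domain teachers and augmentation strategy are abstracted away, and all names, shapes, and hyperparameters here are illustrative assumptions.

```python
import numpy as np

# Hedged sketch of knowledge distillation: fit a small "student" to
# match a larger "teacher" model's outputs (its "soft" targets) via
# gradient descent on an MSE distillation loss. Linear models stand in
# for the networks in the paper.

rng = np.random.default_rng(42)
X = rng.standard_normal((256, 16))        # unlabeled training inputs
W_teacher = rng.standard_normal((16, 4))  # stands in for a big prior model
y_soft = X @ W_teacher                    # teacher predictions as targets

W_student = np.zeros((16, 4))             # compact student, trained from scratch
lr = 0.1
for _ in range(500):
    pred = X @ W_student
    grad = X.T @ (pred - y_soft) / len(X)  # gradient of mean squared error
    W_student -= lr * grad

# After training, the student closely matches the teacher on this data.
mse = float(np.mean((X @ W_student - y_soft) ** 2))
```

In the paper's setting the payoff of this pattern is the reported 6x to 240x speedup: only the lightweight student runs at inference time, while the heavy domain-specialized teachers are used solely during training.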