Edit-aware RAW Reconstruction

arXiv cs.CV / 4/27/2026

💬 Opinion · Tools & Practical Usage · Models & Research

Key Points

  • The paper introduces an edit-aware plug-and-play loss function for RAW reconstruction from edited camera outputs, addressing the limitations of pixel-wise RAW fidelity methods under diverse rendering and editing styles.
  • It uses a modular, differentiable ISP that simulates realistic photofinishing pipelines, with ISP parameters randomly sampled during training to reflect practical variations in camera processing.
  • The training objective is computed in sRGB space by rendering both the ground-truth and reconstructed RAWs through the differentiable ISP, making the recovered RAWs more robust to real post-processing (see the sketch after this list).
  • Experiments show up to a 1.5–2 dB PSNR improvement in sRGB reconstruction quality across multiple editing conditions, with further gains when metadata-assisted RAW reconstruction methods are fine-tuned for specific edits.
  • The authors position the approach as broadly applicable, aiming to improve edit fidelity and rendering flexibility across existing RAW reconstruction frameworks for consumer imaging workflows.
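The loss described above can be pictured as a thin wrapper around any differentiable ISP. The sketch below is an illustration under assumed interfaces, not the authors' code: `DifferentiableISP`, its `sample_params` method, and the use of an L1 penalty in sRGB space are placeholders the summary does not specify.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EditAwareLoss(nn.Module):
    """Renders ground-truth and reconstructed RAWs through the same
    randomly configured differentiable ISP and compares them in sRGB."""

    def __init__(self, differentiable_isp: nn.Module):
        super().__init__()
        self.isp = differentiable_isp  # modular, differentiable ISP (assumed interface)

    def forward(self, raw_pred: torch.Tensor, raw_gt: torch.Tensor) -> torch.Tensor:
        # Draw one set of photofinishing parameters per batch, so the
        # reconstruction is penalized under a randomly chosen rendering style.
        params = self.isp.sample_params(batch_size=raw_pred.shape[0])
        srgb_pred = self.isp(raw_pred, params)
        srgb_gt = self.isp(raw_gt, params)
        return F.l1_loss(srgb_pred, srgb_gt)
```

Because the term only needs the predicted and ground-truth RAWs, it can be added on top of whatever pixel-wise RAW loss an existing framework already uses, which is what makes the formulation plug-and-play.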

Abstract

Users frequently edit camera images post-capture to achieve their preferred photofinishing style. While editing in the RAW domain provides greater accuracy and flexibility, most edits are performed on the camera's display-referred output (e.g., 8-bit sRGB JPEG) since RAW images are rarely stored. Existing RAW reconstruction methods can recover RAW data from sRGB images, but these approaches are typically optimized for pixel-wise RAW reconstruction fidelity and tend to degrade under diverse rendering styles and editing operations. We introduce a plug-and-play, edit-aware loss function that can be integrated into any existing RAW reconstruction framework to make the recovered RAWs more robust to different rendering styles and edits. Our loss formulation incorporates a modular, differentiable image signal processor (ISP) that simulates realistic photofinishing pipelines with tunable parameters. During training, parameters for each ISP module are randomly sampled from carefully designed distributions that model practical variations in real camera processing. The loss is then computed in sRGB space between ground-truth and reconstructed RAWs rendered through this differentiable ISP. Incorporating our loss improves sRGB reconstruction quality by up to 1.5-2 dB PSNR across various editing conditions. Moreover, when applied to metadata-assisted RAW reconstruction methods, our approach enables fine-tuning for target edits, yielding further gains. Since photographic editing is the primary motivation for RAW reconstruction in consumer imaging, our simple yet effective loss function provides a general mechanism for enhancing edit fidelity and rendering flexibility across existing methods.
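As a rough picture of the "modular, differentiable ISP with randomly sampled parameters" idea, the toy module below chains two differentiable photofinishing stages (white-balance gains and a global tone curve). The stages, parameter ranges, and tensor layout are illustrative assumptions; the paper's actual ISP modules and sampling distributions are not detailed in the abstract.

```python
import torch
import torch.nn as nn

class ToyDifferentiableISP(nn.Module):
    """Two-stage illustrative ISP: per-channel white-balance gains followed
    by a global gamma tone curve. Every operation is differentiable."""

    def sample_params(self, batch_size: int) -> dict:
        # Illustrative ranges only; the paper samples from carefully designed
        # distributions that model variations in real camera processing.
        wb = 1.0 + 0.5 * (torch.rand(batch_size, 3) - 0.5)  # gains in [0.75, 1.25]
        gamma = 1.8 + 0.8 * torch.rand(batch_size, 1)       # exponent in [1.8, 2.6]
        return {"wb": wb, "gamma": gamma}

    def forward(self, raw: torch.Tensor, params: dict) -> torch.Tensor:
        # raw: (B, 3, H, W) demosaiced linear RAW in [0, 1] (assumed layout).
        out = raw * params["wb"].view(-1, 3, 1, 1)
        out = out.clamp(0.0, 1.0) ** (1.0 / params["gamma"].view(-1, 1, 1, 1))
        return out
```

Paired with the loss sketch after the key points, `EditAwareLoss(ToyDifferentiableISP())` would give a runnable stand-in for experimenting with the idea on top of an existing RAW reconstruction model.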