Practical exposure correction via compensation

arXiv cs.CV · April 29, 2026

💬 Opinion · Tools & Practical Usage · Models & Research

Key Points

  • The paper introduces a practical exposure corrector (PEC) aimed at improving image exposure for computer-vision inputs captured under unsuitable lighting.
  • It addresses prior limitations by using an exposure-sensitive compensation model that enhances expressiveness for unknown scenes, alongside an exposure adversarial function to encourage scene-adaptive compensation.
  • The method employs a stable and robust iterative shrinkage scheme to avoid the complex inference pipelines common in earlier approaches.
  • Experiments on eight challenging datasets demonstrate strong adaptability to unseen environments and high efficiency: processing a 2K image takes only 0.0009 s on a GeForce RTX 2080Ti GPU.
  • The authors further validate PEC’s flexibility via analysis across multiple downstream vision tasks and provide code at https://rsliu.tech/PEC.
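The core idea of compensation-based correction can be illustrated with a generic iterative scheme: estimate an exposure error, apply a compensating adjustment, and shrink the step each round for stability. The sketch below is only illustrative and is not the authors' PEC algorithm; the function name, the target exposure of 0.5, and the step-shrinking schedule are all assumptions for the example.

```python
# Illustrative sketch of an iterative, compensation-style exposure corrector.
# NOT the PEC method from the paper; all parameter choices are hypothetical.
import numpy as np

def iterative_exposure_correct(img, target_exposure=0.5, n_iters=5, step=0.8):
    """Nudge an image (float array in [0, 1]) toward a target mean
    exposure via repeated gamma-style compensation."""
    x = np.clip(img.astype(np.float64), 1e-6, 1.0)
    for _ in range(n_iters):
        # exposure error: positive if the image is brighter than the target
        error = x.mean() - target_exposure
        # gamma > 1 darkens, gamma < 1 brightens; magnitude follows the error
        gamma = 1.0 + step * error
        x = np.clip(x ** gamma, 1e-6, 1.0)
        step *= 0.5  # shrink the step each iteration for a stable update
    return x
```

Each iteration halves the step, so the update settles toward a fixed point rather than oscillating; this mirrors, in spirit, the "stable and robust iterative shrinkage" the paper highlights, though the actual PEC update rule differs.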

Abstract

In computer vision, correcting the exposure level is a fundamental task for enhancing the visual quality of observations with inappropriate lightness. However, existing methodologies tend to be impractical because they lack adaptability to unknown scenes due to restricted modeling patterns and struggle to achieve satisfactory efficiency due to complex computational flows. To tackle these challenges, we establish a new practical exposure corrector (PEC) that excels in both quality and efficiency. Specifically, to overcome the limited expressive power of existing modeling patterns, we build a general model with exposure-sensitive compensation to provide an intuitive modeling perspective. We also design a simple but effective exposure adversarial function to catalyze scene-adaptive compensation. Building on the aforementioned key concepts, we develop a stable and robust iterative shrinkage scheme, avoiding the complex inferences encountered in existing studies. Extensive experimental evaluations across eight challenging datasets showcase the strong adaptability of the developed model to unknown environments. The model offers impressive processing speed, requiring only 0.0009 s to handle a 2K image on a device equipped with a GeForce RTX 2080Ti GPU. Experimental analysis of different downstream vision tasks further verifies the flexibility of the model. The code is available at https://rsliu.tech/PEC.