Preference-Guided Debiasing for No-Reference Enhancement Image Quality Assessment

arXiv cs.CV / 3/23/2026


Key Points

  • The paper identifies that no-reference image quality assessment (NR-IQA) for enhanced images overfits to enhancement-specific patterns, hindering cross-algorithm generalization.
  • It proposes a preference-guided debiasing framework that learns an enhancement-preference embedding space via supervised contrastive learning to cluster images by similar enhancement styles.
  • The method estimates the enhancement-induced nuisance component in the raw quality representation and removes it before quality regression; a two-stage training strategy (preference-space learning first, then debiased prediction) stabilizes optimization.
  • Experiments on public EIQA benchmarks show improved robustness and cross-algorithm generalization, reducing algorithm-induced representation bias compared with existing approaches.
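
The supervised contrastive step above is the standard SupCon objective applied with enhancement-style labels: images produced by similar enhancement styles act as positives for one another. Below is a minimal NumPy sketch of that loss; the function name, temperature value, and label scheme are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def supcon_loss(embeddings, style_labels, temperature=0.1):
    """Supervised contrastive loss with enhancement-style labels:
    same-style images are positives. Illustrative sketch only."""
    # L2-normalize so the similarity matrix holds cosine similarities.
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = (z @ z.T) / temperature
    n = len(style_labels)
    losses = []
    for i in range(n):
        positives = [j for j in range(n)
                     if j != i and style_labels[j] == style_labels[i]]
        if not positives:
            continue
        others = [a for a in range(n) if a != i]
        logits = sim[i, others]
        # Numerically stable log-sum-exp over all anchors a != i.
        m = logits.max()
        log_denom = m + np.log(np.exp(logits - m).sum())
        # -1/|P(i)| * sum_{p in P(i)} [ sim_ip - log_denom ]
        losses.append(log_denom - sim[i, positives].mean())
    return float(np.mean(losses))
```

Embeddings that cluster by enhancement style yield a lower loss than mixed ones, which is exactly the pressure that pulls same-style images together in the preference space.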

Abstract

Current no-reference image quality assessment (NR-IQA) models for enhanced images often struggle to generalize, as they tend to overfit to the distinct patterns of specific enhancement algorithms rather than evaluating genuine perceptual quality. To address this issue, we propose a preference-guided debiasing framework for no-reference enhancement image quality assessment (EIQA). Specifically, we first learn a continuous enhancement-preference embedding space using supervised contrastive learning, where images generated by similar enhancement styles are encouraged to have closer representations. Based on this, we further estimate the enhancement-induced nuisance component contained in the raw quality representation and remove it before quality regression. In this way, the model is guided to focus on algorithm-invariant perceptual quality cues instead of enhancement-specific visual fingerprints. To facilitate stable optimization, we adopt a two-stage training strategy that first learns the enhancement-preference space and then performs debiased quality prediction. Extensive experiments on public EIQA benchmarks demonstrate that the proposed method effectively mitigates algorithm-induced representation bias and achieves superior robustness and cross-algorithm generalization compared with existing approaches.
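The abstract does not give equations for the debiasing step (estimating the enhancement-induced nuisance component and removing it before regression). One simple reading is a linear projection: subtract the component of the raw quality feature that lies in the span of the learned enhancement-preference directions. The helper below is a hypothetical sketch under that assumption, not the paper's exact estimator.

```python
import numpy as np

def remove_nuisance(quality_feat, pref_directions):
    """Hypothetical linear debiasing: project the raw quality feature
    onto the subspace spanned by enhancement-preference directions and
    subtract that (enhancement-induced) component before regression."""
    # Orthonormal basis of the nuisance subspace (directions given as rows).
    Q, _ = np.linalg.qr(pref_directions.T)
    nuisance = Q @ (Q.T @ quality_feat)   # enhancement-induced component
    return quality_feat - nuisance        # algorithm-invariant residual
```

In the two-stage scheme described above, stage one would train the preference encoder that supplies `pref_directions`, and stage two would freeze it and fit the quality regressor on the debiased residual.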