EvoIQA - Explaining Image Distortions with Evolved White-Box Logic

arXiv cs.CV / 3/18/2026

📰 News · Ideas & Deep Analysis · Models & Research

Key Points

  • EvoIQA uses genetic programming to evolve explicit, human-readable formulas for image quality assessment, providing a white-box alternative to black-box deep learning models.
  • It leverages a rich terminal set drawn from VSI, VIF, FSIM, and HaarPSI metrics to map structural, chromatic, and information-theoretic degradations into mathematical equations.
  • The evolved models align well with human visual preferences and outperform traditional hand-crafted IQA metrics while achieving parity with state-of-the-art deep learning models like DB-CNN.
  • The approach demonstrates that interpretability and competitive performance can coexist in IQA, potentially influencing how quality metrics are designed in practice.

Abstract

Traditional Image Quality Assessment (IQA) metrics typically fall into one of two extremes: rigid, hand-crafted mathematical models or "black-box" deep learning architectures that completely lack interpretability. To bridge this gap, we propose EvoIQA, a fully explainable symbolic regression framework based on Genetic Programming that evolves explicit, human-readable mathematical formulas for image quality assessment. Utilizing a rich terminal set from the VSI, VIF, FSIM, and HaarPSI metrics, our framework inherently maps structural, chromatic, and information-theoretic degradations into observable mathematical equations. Our results demonstrate that the evolved GP models consistently achieve strong alignment between predictions and human visual preferences. Furthermore, they not only outperform traditional hand-crafted metrics but also achieve performance parity with complex, state-of-the-art deep learning models like DB-CNN, proving that we no longer have to sacrifice interpretability for state-of-the-art performance.