IQ-LUT: Interpolated and Quantized LUT for Efficient Image Super-Resolution

arXiv cs.CV / 4/9/2026


Key Points

  • The paper introduces IQ-LUT, a method to make lookup-table-based image super-resolution more practical by cutting LUT size without sacrificing (and potentially improving) output quality.
  • It reduces the LUT index space by integrating interpolation and quantization into a single-input, multiple-output ECNN, addressing the storage bottleneck that grows exponentially with receptive field and bit-depth.
  • It uses residual learning to lessen sensitivity to LUT bit-depth, improving training stability and focusing reconstruction on fine-grained visual details.
  • Knowledge distillation guides a non-uniform quantization strategy to optimize quantization levels, shrinking storage further while compensating for quantization-induced quality loss.
  • Benchmarks reportedly show up to 50× lower storage costs versus baseline ECNN approaches while achieving superior super-resolution quality, supporting deployment on resource-constrained devices.
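The storage bottleneck in the second bullet can be made concrete with a little arithmetic: a LUT indexed directly by pixel values needs one entry per combination of inputs, so its size grows exponentially in both the number of pixels in the receptive field and the per-pixel bit-depth. A minimal sketch (function name and the example sizes are illustrative, not from the paper):

```python
# Illustrative sketch of why direct LUT indexing explodes: the number of
# entries is (2**bits) ** n_pixels, exponential in both factors.
def lut_index_space(n_pixels: int, bits: int) -> int:
    """Number of LUT entries for n_pixels inputs at the given bit-depth."""
    return (2 ** bits) ** n_pixels

# A 2x2 receptive field at 4-bit vs. full 8-bit depth:
small = lut_index_space(4, 4)   # 16**4 = 65,536 entries
large = lut_index_space(4, 8)   # 256**4 ~= 4.3 billion entries
```

Raising either the receptive field or the bit-depth by even a small amount multiplies the table size, which is why IQ-LUT targets the index space itself rather than just compressing the stored values.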

Abstract

Lookup table (LUT) methods demonstrate considerable potential in accelerating image super-resolution inference. However, pursuing higher image quality through larger receptive fields and bit-depth triggers exponential growth in the LUT's index space, creating a storage bottleneck that limits deployment on resource-constrained devices. We introduce IQ-LUT, which achieves a reduction in LUT size while simultaneously enhancing super-resolution quality. First, we integrate interpolation and quantization into the single-input, multiple-output ECNN, which dramatically reduces the index space and thereby the overall LUT size. Second, the integration of residual learning mitigates the dependence on LUT bit-depth, which facilitates training stability and prioritizes the reconstruction of fine-grained details for superior visual quality. Finally, guided by knowledge distillation, our non-uniform quantization process optimizes the quantization levels, thereby reducing storage while also compensating for quantization loss. Extensive benchmarking demonstrates that our approach substantially reduces storage costs (by up to 50× compared to ECNN) while achieving superior super-resolution quality.
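The residual-learning and non-uniform-quantization ideas in the abstract can be sketched together: the final output is a cheap interpolation of the input plus a small LUT-predicted residual, and the residual is snapped to a learned, non-uniformly spaced codebook. This is a hypothetical illustration under stated assumptions, not the paper's implementation; the function names and level values are invented for the example.

```python
# Hypothetical sketch: output = coarse interpolation + quantized residual.
# Because the base interpolation carries the low-frequency content, the LUT
# only needs to represent fine details, easing the bit-depth requirement.

def nearest_level(x: float, levels: list[float]) -> float:
    """Non-uniform quantization: snap x to the closest learned level."""
    return min(levels, key=lambda l: abs(l - x))

def reconstruct(base: float, residual: float, levels: list[float]) -> float:
    """Add the quantized fine-detail residual onto the interpolated base."""
    return base + nearest_level(residual, levels)

# Levels denser near zero, where residuals tend to concentrate once the
# interpolated base is subtracted (this spacing is an assumption).
levels = [-0.2, -0.05, 0.0, 0.05, 0.2]
out = reconstruct(0.5, 0.04, levels)  # 0.5 + 0.05 = 0.55
```

In the paper, the placement of such levels is optimized under knowledge-distillation guidance rather than hand-picked as above; the sketch only shows why non-uniform spacing can spend precision where the residual distribution actually lives.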