
ARCHE: Autoregressive Residual Compression with Hyperprior and Excitation

arXiv cs.CV / 3/12/2026


Key Points

  • ARCHE is an end-to-end learned image compression framework that balances modeling accuracy and computational efficiency by unifying hierarchical, spatial, and channel priors within a single probabilistic model.
  • It achieves state-of-the-art rate-distortion performance, reducing BD-Rate by approximately 48% versus the benchmark model of Ballé et al., 30% versus the channel-wise autoregressive model of Minnen & Singh, and 5% versus the VVC Intra codec on the Kodak benchmark.
  • The approach avoids recurrent or transformer components and uses adaptive feature recalibration and residual refinement, with 95M parameters and about 222 ms per image, supporting practical deployment.
  • Visual comparisons indicate sharper textures and improved color fidelity at low bitrates, illustrating effective entropy modeling through efficient convolutional design.
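The "adaptive feature recalibration" mentioned above is commonly realized as a squeeze-and-excitation style channel gate. The summary does not specify ARCHE's exact module, so the following NumPy sketch only illustrates the general mechanism; the function and weight names are illustrative assumptions, not taken from the paper.

```python
# Illustrative squeeze-and-excitation style channel recalibration in NumPy.
# All names (channel_recalibration, w1/w2/b1/b2) are assumptions for
# illustration; ARCHE's actual recalibration module is not specified here.
import numpy as np

def channel_recalibration(x, w1, b1, w2, b2):
    """Reweight each channel of x (shape C, H, W) by a learned gate in (0, 1)."""
    z = x.mean(axis=(1, 2))                    # squeeze: per-channel global average
    h = np.maximum(w1 @ z + b1, 0.0)           # bottleneck projection + ReLU
    s = 1.0 / (1.0 + np.exp(-(w2 @ h + b2)))   # sigmoid gate, one weight per channel
    return x * s[:, None, None]                # excitation: rescale each channel

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 4, 4))                  # toy feature map: 8 channels, 4x4
w1, b1 = rng.standard_normal((2, 8)), np.zeros(2)   # reduce 8 -> 2 channels
w2, b2 = rng.standard_normal((8, 2)), np.zeros(8)   # expand 2 -> 8 channels
y = channel_recalibration(x, w1, b1, w2, b2)        # same shape, channels rescaled
```

Because the gate lies in (0, 1), the block can only attenuate channels; in practice it is wrapped in a residual connection so the network can also preserve them unchanged.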

Abstract

Recent progress in learning-based image compression has demonstrated that end-to-end optimization can substantially outperform traditional codecs by jointly learning compact latent representations and probabilistic entropy models. However, many existing approaches achieve high rate-distortion efficiency at the expense of increased computational cost and limited parallelism. This paper presents ARCHE (Autoregressive Residual Compression with Hyperprior and Excitation), an end-to-end learned image compression framework that balances modeling accuracy and computational efficiency. The proposed architecture unifies hierarchical, spatial, and channel-based priors within a single probabilistic framework, capturing both global and local dependencies in the latent representation of the image, while employing adaptive feature recalibration and residual refinement to enhance latent representation quality. Without relying on recurrent or transformer-based components, ARCHE attains state-of-the-art rate-distortion efficiency: it reduces BD-Rate by approximately 48% relative to the commonly used benchmark model of Ballé et al., 30% relative to the channel-wise autoregressive model of Minnen & Singh, and 5% relative to the VVC Intra codec on the Kodak benchmark dataset. The framework remains computationally efficient, with 95M parameters and a running time of 222 ms per image. Visual comparisons confirm sharper textures and improved color fidelity, particularly at lower bit rates, demonstrating that accurate entropy modeling can be achieved through efficient convolutional designs suitable for practical deployment.
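The BD-Rate savings quoted in the abstract come from the standard Bjøntegaard metric, which compares two rate-distortion curves at equal quality. The sketch below follows the common cubic-fit formulation of that metric; it is a general illustration, not code from the paper.

```python
# Sketch of the Bjoentegaard delta-rate (BD-Rate) metric: fit log-rate as a
# cubic polynomial in PSNR for both codecs, integrate over the shared PSNR
# range, and report the average bitrate change in percent. This follows the
# common formulation of the metric, not the paper's own evaluation code.
import numpy as np

def bd_rate(rate_anchor, psnr_anchor, rate_test, psnr_test):
    """Average % bitrate of `test` vs `anchor` at equal PSNR (negative = savings)."""
    p_a = np.polyfit(psnr_anchor, np.log(rate_anchor), 3)
    p_t = np.polyfit(psnr_test, np.log(rate_test), 3)
    lo = max(min(psnr_anchor), min(psnr_test))   # overlapping quality range
    hi = min(max(psnr_anchor), max(psnr_test))
    int_a = np.polyval(np.polyint(p_a), hi) - np.polyval(np.polyint(p_a), lo)
    int_t = np.polyval(np.polyint(p_t), hi) - np.polyval(np.polyint(p_t), lo)
    return (np.exp((int_t - int_a) / (hi - lo)) - 1.0) * 100.0

# Toy check: a codec using exactly half the bits at every PSNR -> -50% BD-Rate.
anchor_rate = [0.1, 0.2, 0.4, 0.8]
psnr = [30.0, 32.0, 34.0, 36.0]
savings = bd_rate(anchor_rate, psnr, [r / 2 for r in anchor_rate], psnr)
```

Under this convention, the "48% reduction versus Ballé et al." means ARCHE's curve would yield a BD-Rate of about -48% with the Ballé et al. model as the anchor.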