Efficient INT8 Single-Image Super-Resolution via Deployment-Aware Quantization and Teacher-Guided Training
arXiv cs.CV / 4/23/2026
Key Points
- The paper proposes a deployment-aware INT8-quantized framework for ×3 single-image super-resolution that minimizes inference complexity by performing most computation in low-resolution space and using a lightweight re-parameterizable backbone with PixelShuffle reconstruction.
- It introduces a three-stage training pipeline that progressively improves reconstruction quality using spatial supervision, Charbonnier and DCT-domain losses, and confidence-weighted distillation from a Mamba-based teacher.
- The method applies quantization-aware training directly on the fused deploy graph, further stabilizing INT8 quantization via weight clipping and BatchNorm recalibration.
- On the MAI 2026 Quantized 4K Image Super-Resolution Challenge test set, the authors report 29.79 dB PSNR and 0.8634 SSIM, with a final INT8 submission score of 1.8 on the mobile target.
- Ablation results indicate that teacher-guided supervision materially improves the reconstruction performance of the dynamic-range INT8 TFLite model, and that the fixed-shape deployable INT8 TFLite artifact achieves the highest metrics reported in the study.
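The building blocks named above (Charbonnier loss, a DCT-domain loss, and PixelShuffle reconstruction) have standard textbook definitions. The sketch below illustrates those standard forms in NumPy/SciPy; it is not the paper's implementation, and the exact loss weighting and epsilon values used by the authors are unknown.

```python
import numpy as np
from scipy.fft import dctn


def charbonnier_loss(pred, target, eps=1e-3):
    # Smooth, differentiable L1 surrogate commonly used in SR training:
    # mean(sqrt((pred - target)^2 + eps^2)). eps here is an assumed default.
    return float(np.mean(np.sqrt((pred - target) ** 2 + eps ** 2)))


def dct_domain_loss(pred, target):
    # L1 distance between 2-D DCT coefficients, penalizing errors in the
    # frequency domain (one common way to define a "DCT-domain loss").
    return float(np.mean(np.abs(dctn(pred, norm="ortho")
                                - dctn(target, norm="ortho"))))


def pixel_shuffle(x, r):
    # Sub-pixel (PixelShuffle) reconstruction: rearrange a low-resolution
    # feature map of shape (C*r^2, H, W) into (C, H*r, W*r), so the network
    # can do most of its computation at low resolution.
    c_r2, h, w = x.shape
    c = c_r2 // (r * r)
    x = x.reshape(c, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)  # -> (C, H, r, W, r)
    return x.reshape(c, h * r, w * r)
```

For example, a ×3 model would emit 9 channels per output channel and call `pixel_shuffle(features, 3)` as its final reconstruction step.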