TinyNeRV: Compact Neural Video Representations via Capacity Scaling, Distillation, and Low-Precision Inference

arXiv cs.CV · April 13, 2026


Key Points

  • The paper introduces TinyNeRV, a systematic study of very compact Neural Representations for Videos (NeRV) aimed at resource-constrained and real-time deployment.
  • It proposes two lightweight variants, NeRV-T and NeRV-T+, and evaluates how aggressive capacity reduction impacts reconstruction quality, computation, and decoding throughput across multiple video datasets.
  • To improve fidelity without raising inference cost, the authors explore knowledge distillation using frequency-aware focal supervision for low-capacity models.
  • The study also assesses robustness under low-precision inference via both post-training quantization and quantization-aware training.
  • Results show that well-designed tiny NeRV architectures can substantially cut parameter count, compute cost, and memory while maintaining favorable quality-efficiency trade-offs, with an official implementation released on GitHub.
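The frequency-aware focal supervision mentioned above can be pictured as a loss that compares student and teacher reconstructions in the Fourier domain and emphasizes the frequencies where they disagree most. The paper's exact formulation is not reproduced here; the following NumPy sketch, loosely modeled on focal-frequency-style losses, is an illustrative assumption (the function name, `alpha` exponent, and normalization are all hypothetical):

```python
import numpy as np

def focal_frequency_loss(student, teacher, alpha=1.0):
    """Hedged sketch of a frequency-aware focal distillation loss.

    Compares the 2D spectra of a student and a teacher frame and
    down-weights frequencies that are already well matched, so the
    student focuses on the hard (typically high-frequency) detail.
    """
    fs = np.fft.fft2(student)            # student spectrum
    ft = np.fft.fft2(teacher)            # teacher spectrum
    diff = np.abs(fs - ft)               # per-frequency magnitude error
    weight = diff ** alpha               # focal weighting: harder freqs count more
    weight = weight / (weight.max() + 1e-8)  # normalize weights to [0, 1]
    return float(np.mean(weight * diff ** 2))
```

With `alpha = 0` this degenerates to a plain spectral MSE; larger `alpha` concentrates the gradient on the worst-matched frequency bins.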

Abstract

Implicit neural video representations encode entire video sequences within the parameters of a neural network and enable constant-time frame reconstruction. Recent work on Neural Representations for Videos (NeRV) has demonstrated competitive reconstruction performance while avoiding the sequential decoding process of conventional video codecs. However, most existing studies focus on moderate- or high-capacity models, leaving the behavior of the extremely compact configurations required for constrained environments insufficiently explored. This paper presents a systematic study of tiny NeRV architectures designed for efficient deployment. Two lightweight configurations, NeRV-T and NeRV-T+, are introduced and evaluated across multiple video datasets in order to analyze how aggressive capacity reduction affects reconstruction quality, computational complexity, and decoding throughput. Beyond architectural scaling, the work investigates strategies for improving the performance of compact models without increasing inference cost. Knowledge distillation with frequency-aware focal supervision is explored to enhance reconstruction fidelity in low-capacity networks. In addition, the impact of low-precision inference is examined through both post-training quantization and quantization-aware training to study the robustness of tiny models under reduced numerical precision. Experimental results demonstrate that carefully designed tiny NeRV variants can achieve favorable quality-efficiency trade-offs while substantially reducing parameter count, computational cost, and memory requirements. These findings provide insight into the practical limits of compact neural video representations and offer guidance for deploying NeRV-style models in resource-constrained and real-time environments. The official implementation is available at https://github.com/HannanAkhtar/TinyNeRV-Implementation.
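Of the two low-precision routes the abstract names, post-training quantization is the simpler: the trained weights are mapped to low-bit integers after the fact, with no retraining. The abstract does not specify the scheme, so the NumPy sketch below assumes a generic symmetric per-tensor int8 quantizer (function names and the test tensor are illustrative, not from the paper):

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor post-training quantization to int8.

    Generic illustration only; the paper's actual PTQ scheme may use
    per-channel scales, asymmetric ranges, or calibration data.
    """
    scale = np.abs(w).max() / 127.0              # map the largest weight to +/-127
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # Recover an approximate float tensor for (simulated) inference.
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal(1024).astype(np.float32)  # stand-in weight tensor
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
err = np.abs(w - w_hat).max()  # worst-case rounding error is at most scale / 2
```

Quantization-aware training, by contrast, inserts this round-trip into the forward pass during training (typically with a straight-through gradient estimator), which is why it tends to be more robust at very low precision.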