KVNN: Learnable Multi-Kernel Volterra Neural Networks

arXiv cs.CV / 4/17/2026


Key Points

  • The paper proposes KVNN, a kernelized Volterra Neural Network that captures higher-order compositional interactions using a learnable multi-kernel representation.
  • KVNN models different interaction orders with separate polynomial-kernel components that use compact, learnable centers, enabling an order-adaptive parameterization.
  • The architecture learns features through layered compositions where each layer has parallel branches for different polynomial orders, allowing KVNN filters to directly replace standard convolution kernels in existing networks.
  • Experiments on video action recognition and image denoising show a favorable performance–efficiency trade-off, with consistently fewer parameters and lower GFLOPs at competitive or improved accuracy.
  • The gains persist even when training from scratch without large-scale pretraining, supporting KVNN as a practical way to balance expressiveness and compute cost in modern deep learning models.
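The multi-kernel idea in the points above can be sketched in a few lines. The function below is a toy illustration, not the paper's implementation: every name and shape is an assumption. It computes a single filter response as a sum over interaction orders, where each order p has its own small bank of learnable centers and mixing weights, and the polynomial kernel (x·c)^p stands in for the full order-p Volterra tensor.

```python
import numpy as np

def kvnn_response(x, centers, coeffs, orders):
    """Hypothetical kVNN-style filter response for one input patch x.

    Each interaction order p gets its own bank of learnable centers
    (rows of centers[p]) and mixing weights coeffs[p].  The polynomial
    kernel (x . c)**p implicitly spans all order-p monomials of x, so a
    handful of centers replaces the exponentially large order-p
    Volterra tensor.
    """
    out = 0.0
    for p, C, a in zip(orders, centers, coeffs):
        k = (C @ x) ** p   # polynomial kernel against each center
        out += a @ k       # learnable mixture over the centers
    return out
```

For order 2, this is exactly a low-rank second-order Volterra term: summing a_m (c_m·x)² equals x^T H x with H = Σ_m a_m c_m c_mᵀ, which is why a few centers can capture quadratic interactions cheaply.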

Abstract

Higher-order learning is fundamentally rooted in exploiting compositional features. It hinges on enriching the representation with more elaborate interactions of the data, which, in turn, tends to increase the model complexity of conventional large-scale deep learning models. In this paper, a kernelized Volterra Neural Network (kVNN) is proposed. The key to the achieved efficiency lies in a learnable multi-kernel representation, in which different interaction orders are modeled by distinct polynomial-kernel components with compact, learnable centers, yielding an order-adaptive parameterization. Features are learned by composing layers, each of which consists of parallel branches of different polynomial orders, enabling kVNN filters to directly replace standard convolutional kernels within existing architectures. The theoretical results are substantiated by experiments on two representative tasks: video action recognition and image denoising. The results demonstrate favorable performance–efficiency trade-offs: kVNN consistently reduces model (parameter) and computational (GFLOPs) complexity while delivering competitive, and often improved, performance. These results hold even when training from scratch without large-scale pretraining. In summary, we substantiate that structured kernelized higher-order layers offer a practical path to balancing expressivity and computational cost in modern deep networks.
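The claim that kVNN filters can directly replace standard convolutional kernels can be illustrated with a minimal 1D sketch. This is an assumption-laden toy, not the paper's API: it slides a filter made of parallel per-order branches over a signal, and with a single order-1 branch it degenerates to ordinary cross-correlation, i.e. a standard convolutional filter.

```python
import numpy as np

def kvnn_conv1d(signal, centers, coeffs, orders):
    """Toy stand-in for swapping a convolution kernel for a kVNN filter.

    All names and shapes here are illustrative.  centers[p] holds the
    learnable centers for one order branch; coeffs[p] mixes them.
    """
    k = centers[0].shape[1]            # receptive-field width, shared by all branches
    out = np.empty(len(signal) - k + 1)
    for i in range(len(out)):
        patch = signal[i:i + k]
        # sum the parallel per-order branches, as in one kVNN layer
        out[i] = sum(a @ (C @ patch) ** p
                     for p, C, a in zip(orders, centers, coeffs))
    return out
```

Adding an order-2 branch (a second entry in `centers`/`coeffs`/`orders`) enriches the same sliding-window filter with quadratic interactions of the patch, without changing the calling code around it.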