DPU or GPU for Accelerating Neural Networks Inference -- Why not both? Split CNN Inference

arXiv cs.CV / 5/4/2026

💬 Opinion · Developer Stack & Infrastructure · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper proposes “Split CNN Inference,” partitioning convolutional neural network workloads between a DPU and a GPU to reduce edge-device latency for video/image streaming.
  • The approach runs the early CNN layers on the Versal VCK190’s DPU near the data source, then asynchronously pipelines the remaining layers on an NVIDIA RTX 2080 to limit overall latency.
  • It introduces a GNN-based partition index prediction method to automatically choose how to split layers across devices rather than requiring manual partitioning.
  • Experiments on models including LeNet-5, ResNet variants, VGG16, and MobileNetv2 show up to 2.48× lower latency than DPU-only execution and up to 3.37× lower latency than GPU-only execution, and the trained GNN predicts the correct split index with 96.27% accuracy.
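
The split-and-pipeline scheme summarized above can be sketched in plain Python. The threaded producer stands in for the DPU stage and the consumer for the GPU stage; the layer stubs, function names, and queue depth are illustrative assumptions, not the paper's implementation:

```python
import threading
import queue

# Illustrative stand-ins for the two partitions. In the paper, the first
# partition (early CNN layers) runs on the Versal VCK190's DPU near the
# data source, and the second runs on an NVIDIA RTX 2080.
def dpu_partition(frame):
    # stand-in for the initial convolutional layers
    return sum(frame)

def gpu_partition(feature):
    # stand-in for the remaining layers
    return feature * 2

def split_inference(frames):
    """Pipeline the two partitions asynchronously: while the GPU stage
    processes frame i, the DPU stage is already working on frame i+1."""
    q = queue.Queue(maxsize=2)  # small buffer decouples the two stages
    results = []

    def dpu_worker():
        for f in frames:
            q.put(dpu_partition(f))
        q.put(None)  # sentinel: end of stream

    t = threading.Thread(target=dpu_worker)
    t.start()
    while (feat := q.get()) is not None:
        results.append(gpu_partition(feat))
    t.join()
    return results
```

With perfect overlap, the steady-state latency of such a pipeline is bounded by the slower of the two stages plus the inter-device transfer, which is why placing the early layers near the camera/storage reduces end-to-end latency.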

Abstract

Video and image streaming on edge devices requires low latency. To address this, Neural Networks (NNs) are widely used, and prior work mainly focuses on accelerating them on single hardware units such as Graphics Processing Units (GPUs), Field Programmable Gate Arrays (FPGAs), and Deep Learning Processing Units (DPUs). However, further latency reductions can be achieved by combining these units. In this paper, partitioning CNN inference across a DPU and a GPU (Split CNN Inference) is proposed. The first partition, consisting of the initial CNN layers that process the input images, runs on the AI Engines (DPU) of a Versal VCK190, near the source of the data. Pipelined asynchronously with it, a GPU (NVIDIA RTX 2080) runs the remaining layers as the second partition, which also reduces the data transfer between the data source (storage/camera) and the GPU. Furthermore, a Graph Neural Network (GNN)-based partition index prediction method is proposed to automate the partitioning of CNNs required for Split Inference. Well-established models such as LeNet-5, ResNet18/50/101/152, VGG16, and MobileNetv2 are analyzed. Results demonstrate up to 2.48x latency improvement over DPU-only execution and up to 3.37x over GPU-only execution. The trained GNN model splits the layers between the appropriate devices with 96.27% accuracy.
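
To make the partition-index problem concrete, here is a minimal sketch of the exhaustive search a predictor would replace: pick the split index k (first k layers on the DPU, the rest on the GPU) that minimizes a simple pipelined-latency cost. The cost model and all timing numbers are illustrative assumptions; the paper instead trains a GNN to predict the index directly from the network's graph structure:

```python
def best_split(dpu_ms, gpu_ms, transfer_ms):
    """Return the split index k minimizing an assumed pipelined latency.

    dpu_ms[i]      -- per-layer latency of layer i on the DPU (hypothetical)
    gpu_ms[i]      -- per-layer latency of layer i on the GPU (hypothetical)
    transfer_ms[k] -- cost of moving the activation at split point k
                      (len(dpu_ms) + 1 entries, for k = 0 .. n)
    """
    n = len(dpu_ms)
    best_k, best_cost = 0, float("inf")
    for k in range(n + 1):
        # Steady-state pipelined latency ~ slower stage + transfer cost.
        dpu_stage = sum(dpu_ms[:k])
        gpu_stage = sum(gpu_ms[k:])
        cost = max(dpu_stage, gpu_stage) + transfer_ms[k]
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k, best_cost
```

This brute-force search is cheap for a single model but must be rerun (with fresh profiling) for every new architecture; learning to predict k from the layer graph, as the paper's GNN does, amortizes that cost across models.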