NaviSplit: Dynamic Multi-Branch Split DNNs for Efficient Distributed Autonomous Navigation

arXiv cs.RO / 4/10/2026


Key Points

  • The paper introduces NaviSplit, a lightweight distributed autonomous navigation framework that splits a deep neural network into a vehicle-executed head and an edge-server-executed tail to reduce on-board compute and communication demands.
  • A neural gate dynamically selects among multiple head model branches to minimize channel usage while still supporting navigation inference efficiently.
  • The perception pipeline extracts a 2D depth map from a monocular RGB image, is implemented and tested in the Microsoft AirSim simulator, and transmits only compacted perception outputs to an edge device.
  • Experiments report 72–81% depth extraction accuracy with very small transmissions (1.2–18 KB), and with the neural gate the system slightly improves navigation accuracy by ~0.3% versus a larger static network while cutting data rate by about 95%.
  • The authors claim it is the first example (to their knowledge) of dynamic multi-branch split DNNs specifically applied to autonomous navigation for lightweight UAVs.
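The head/tail split and the gate's branch selection described above can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: the branch names, payload sizes, encoders, and the rule-based gate below are all invented for clarity (the paper's gate is a learned neural model, and its heads are trained DNN front-ends, not slicing functions).

```python
# Hypothetical sketch of NaviSplit-style control flow: a gate picks one of
# several head "branches" (each compressing perception to a different payload
# size), the vehicle runs only that head, and the compacted features are sent
# to the edge server's tail model, which infers a navigation command.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class HeadBranch:
    name: str
    payload_kb: float               # size of the compacted features on the channel
    encode: Callable[[list], list]  # vehicle-side head: sensor input -> compact features

def gate_select(branches: List[HeadBranch], channel_budget_kb: float) -> HeadBranch:
    """Stand-in for the neural gate: pick the richest branch that fits the
    channel budget. (The paper learns this decision; this rule is illustrative.)"""
    feasible = [b for b in branches if b.payload_kb <= channel_budget_kb]
    return max(feasible, key=lambda b: b.payload_kb)

def tail_infer(features: list) -> str:
    """Edge-side tail: compacted features -> navigation command (toy logic)."""
    return "turn_left" if sum(features) < 0 else "forward"

# Illustrative branches spanning the paper's reported 1.2-18 KB payload range.
branches = [
    HeadBranch("tiny",  1.2,  lambda x: x[:4]),
    HeadBranch("small", 6.0,  lambda x: x[:16]),
    HeadBranch("large", 18.0, lambda x: x[:64]),
]

sensor_input = [0.1] * 64  # stand-in for features of a monocular RGB frame
branch = gate_select(branches, channel_budget_kb=8.0)
command = tail_infer(branch.encode(sensor_input))
print(branch.name, command)
```

The point of the structure: only `branch.encode` runs on the vehicle, only its compact output crosses the channel, and the tail's heavier computation stays on the edge server.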

Abstract

Lightweight autonomous unmanned aerial vehicles (UAVs) are emerging as a central component of a broad range of applications. However, autonomous navigation necessitates the implementation of perception algorithms, often deep neural networks (DNNs), that process the input of sensor observations, such as those from cameras and LiDARs, for control logic. The complexity of such algorithms clashes with the severe constraints of these devices in terms of computing power, energy, memory, and execution time. In this paper, we propose NaviSplit, the first instance of a lightweight navigation framework embedding a distributed and dynamic multi-branched neural model. At its core is a DNN split at a compression point, resulting in two model parts: (1) the head model, executed at the vehicle, which partially processes and compacts perception from sensors; and (2) the tail model, executed at an interconnected compute-capable device, which processes the remainder of the compacted perception and infers navigation commands. Different from prior work, the NaviSplit framework includes a neural gate that dynamically selects a specific head model to minimize channel usage while efficiently supporting the navigation network. In our implementation, the perception model extracts a 2D depth map from a monocular RGB image captured by the drone using the robust simulator Microsoft AirSim. Our results demonstrate that the NaviSplit depth model achieves an extraction accuracy of 72–81% while transmitting an extremely small amount of data (1.2–18 KB) to the edge server. When using the neural gate, as utilized by NaviSplit, we obtain a slightly higher navigation accuracy as compared to a larger static network by 0.3% while significantly reducing the data rate by 95%. To the best of our knowledge, this is the first exemplar of a dynamic multi-branched model based on split DNNs for autonomous navigation.
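The reported ~95% data-rate saving follows from the gate usually choosing a small head branch, whereas a static split network ships its full payload on every frame. The back-of-envelope below shows how such an average saving is computed; the payload sizes come from the paper's 1.2–18 KB range, but the branch-usage mix is an invented illustration (the paper's measured traces would be needed to reproduce the exact 95% figure).

```python
# Average per-frame traffic: static split network vs. gated multi-branch heads.
static_payload_kb = 18.0  # larger static network sends this every frame

# Assumed (illustrative) fraction of frames on which the gate picks each branch.
gated_mix = {1.2: 0.92, 6.0: 0.05, 18.0: 0.03}  # payload_kb -> fraction of frames

gated_avg_kb = sum(size * frac for size, frac in gated_mix.items())
saving = 1 - gated_avg_kb / static_payload_kb
print(f"gated avg: {gated_avg_kb:.2f} KB/frame, saving: {saving:.0%}")
```

Under this invented mix the saving comes out near 89%; a mix even more skewed toward the smallest branch, or a larger static baseline payload, pushes it toward the paper's 95%.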