LAGS: Low-Altitude Gaussian Splatting with Groupwise Heterogeneous Graph Learning

arXiv cs.CV / April 21, 2026

📰 News · Developer Stack & Infrastructure · Models & Research

Key Points

  • The paper studies LAGS (Low-Altitude Gaussian Splatting), which reconstructs 3D scenes by aggregating aerial images from distributed drones, and argues that current resource allocation schemes are inefficient because they ignore the image diversity introduced by varying viewpoints.
  • It proposes GW-HGNN (groupwise heterogeneous graph neural network) to allocate drone image transmissions by modeling how different image groups non-uniformly contribute to reconstruction, balancing reconstruction fidelity against transmission cost.
  • The method reframes LAGS losses and communication constraints as graph learning costs and performs dual-level message passing to learn the allocation policy.
  • Experiments on real-world LAGS datasets show GW-HGNN achieves significantly better rendering quality than existing benchmarks on PSNR, SSIM, and LPIPS.
  • The approach also cuts computational latency by roughly 100× versus the MOSEK solver, enabling millisecond-level inference for real-time deployment.
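
To make the "dual-level message passing" idea concrete, here is a minimal toy sketch in numpy. Everything below is a hypothetical illustration, not the paper's actual GW-HGNN: the group features, weight matrices, cost vector, and the softmax allocation rule are all invented for exposition. It only shows the two-level pattern the key points describe: messages passed within each viewpoint group of drone images, then across group-level nodes, with the final allocation trading a learned utility score against a transmission cost.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: 3 viewpoint groups of drone images, each image
# described by an 8-d feature vector. Per the paper's premise, groups
# contribute non-uniformly to reconstruction quality.
groups = [rng.normal(size=(4, 8)), rng.normal(size=(3, 8)), rng.normal(size=(5, 8))]
W_intra = rng.normal(scale=0.1, size=(8, 8))   # intra-group message weights (made up)
W_inter = rng.normal(scale=0.1, size=(8, 8))   # inter-group message weights (made up)
score_w = rng.normal(size=8)                   # maps group embedding to a utility score
cost = np.array([1.0, 1.5, 0.8])               # assumed per-group transmission cost

# Level 1: message passing within each group (mean-aggregate neighbors,
# linear transform, ReLU), then pool to one embedding per group.
group_emb = []
for X in groups:
    H = np.maximum(X + X.mean(axis=0) @ W_intra, 0.0)
    group_emb.append(H.mean(axis=0))
group_emb = np.stack(group_emb)

# Level 2: message passing across group-level nodes (fully connected here).
ctx = group_emb.mean(axis=0)
group_emb = np.maximum(group_emb + ctx @ W_inter, 0.0)

# Allocation: softmax over (utility - cost), a crude stand-in for balancing
# reconstruction fidelity against communication cost.
logits = group_emb @ score_w - cost
alloc = np.exp(logits - logits.max())
alloc /= alloc.sum()
print(alloc)   # bandwidth fractions per group, summing to 1
```

In the actual method the weights would be trained end-to-end against graph learning costs derived from the LAGS losses and communication constraints; this sketch only fixes random weights to show the data flow.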

Abstract

Low-altitude Gaussian splatting (LAGS) facilitates 3D scene reconstruction by aggregating aerial images from distributed drones. However, as LAGS prioritizes maximizing reconstruction quality over communication throughput, existing low-altitude resource allocation schemes become inefficient. This inefficiency stems from their failure to account for image diversity introduced by varying viewpoints. To fill this gap, we propose a groupwise heterogeneous graph neural network (GW-HGNN) for LAGS resource allocation. GW-HGNN explicitly models the non-uniform contribution of different image groups to the reconstruction process, thus automatically balancing data fidelity and transmission cost. The key insight of GW-HGNN is to transform LAGS losses and communication constraints into graph learning costs for dual-level message passing. Experiments on real-world LAGS datasets demonstrate that GW-HGNN significantly outperforms state-of-the-art benchmarks across key rendering metrics, including PSNR, SSIM, and LPIPS. Furthermore, GW-HGNN reduces computational latency by approximately 100x compared to the widely-used MOSEK solver, achieving millisecond-level inference suitable for real-time deployment.