SplAttN: Bridging 2D and 3D with Gaussian Soft Splatting and Attention for Point Cloud Completion

arXiv cs.CV / 5/5/2026


Key Points

  • The paper argues that standard hard projection in multi-modal point cloud completion can sever the connection between modalities, causing a failure mode the authors call Cross-Modal Entropy Collapse.
  • SplAttN addresses this by replacing hard projection with Differentiable Gaussian Splatting to generate a dense, continuous image-plane representation that preserves cross-modal learnability and enables better gradient flow.
  • Extensive experiments reportedly achieve state-of-the-art results on PCN and ShapeNet-55/34 point cloud completion benchmarks.
  • Using KITTI as a real-world stress test, the authors’ counter-factual evaluation suggests competing baselines degrade into unimodal template retrievers whose outputs barely change when visual input is removed, whereas SplAttN’s predictions remain genuinely dependent on visual cues.
  • The authors provide the implementation code publicly on GitHub.

Abstract

Although multi-modal learning has advanced point cloud completion, the theoretical mechanisms remain unclear. Recent works attribute success to the connection between modalities, yet we identify that standard hard projection severs this connection: projecting a sparse point cloud onto the image plane yields an extremely sparse support, which hinders visual prior propagation, a failure mode we term Cross-Modal Entropy Collapse. To address this practical limitation, we propose SplAttN, which replaces hard projection with Differentiable Gaussian Splatting to produce a dense, continuous image-plane representation. By reformulating projection as continuous density estimation, SplAttN avoids collapsed sparse support, facilitates gradient flow, and improves cross-modal connection learnability. Extensive experiments show that SplAttN achieves state-of-the-art performance on PCN and ShapeNet-55/34. Crucially, we utilize the real-world KITTI benchmark as a stress test for multi-modal reliance. Counter-factual evaluation reveals that while baselines degenerate into unimodal template retrievers insensitive to visual removal, SplAttN maintains a robust dependency on visual cues, validating that our method establishes an effective cross-modal connection. Code is available at https://github.com/zay002/SplAttN.
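The core contrast the abstract draws can be illustrated with a toy sketch. This is not the paper's implementation (SplAttN uses differentiable Gaussian Splatting inside a learned pipeline); it is only a minimal NumPy comparison, with assumed image size, point count, and kernel width, showing why hard projection of a sparse point cloud yields near-empty image-plane support while Gaussian soft splatting yields a dense, continuous density:

```python
import numpy as np

def hard_project(points_2d, H, W):
    # Hard projection: each point marks exactly one pixel,
    # so a sparse cloud covers only a tiny fraction of the plane.
    img = np.zeros((H, W))
    for x, y in points_2d:
        xi, yi = int(round(x)), int(round(y))
        if 0 <= yi < H and 0 <= xi < W:
            img[yi, xi] = 1.0
    return img

def gaussian_splat(points_2d, H, W, sigma=2.0):
    # Soft splatting: each point contributes an isotropic Gaussian,
    # giving a dense, continuous (and differentiable) density map.
    ys, xs = np.mgrid[0:H, 0:W]
    img = np.zeros((H, W))
    for x, y in points_2d:
        img += np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma ** 2))
    return img

rng = np.random.default_rng(0)
pts = rng.uniform(0, 64, size=(32, 2))  # 32 projected points, assumed for illustration
hard = hard_project(pts, 64, 64)
soft = gaussian_splat(pts, 64, 64)
print(f"hard support: {np.count_nonzero(hard)} / {64 * 64} pixels")
print(f"soft support: {np.count_nonzero(soft > 1e-3)} / {64 * 64} pixels")
```

With 32 points on a 64x64 plane, hard projection lights up at most 32 pixels, while the Gaussian density is nonzero over most of the plane, which is the "collapsed sparse support" versus "continuous density estimation" distinction the abstract describes.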
