SAND: Spatially Adaptive Network Depth for Fast Sampling of Neural Implicit Surfaces

arXiv cs.CV / April 30, 2026

Key Points

  • The paper addresses a key bottleneck in neural implicit geometry: evaluating implicit networks can be computationally expensive, limiting practical deployment.
  • It observes that the accuracy required of the representation decreases as query points move farther from the target surface, and that difficulty also varies spatially even on the same iso-surface due to local geometric complexity.
  • The proposed SAND framework pairs a volumetric network-depth map with a tailed multi-layer perceptron (T-MLP) to adaptively stop computation per spatial region, avoiding wasted evaluations.
  • By attaching an output “tail” branch to each hidden layer, SAND learns implicit functions (e.g., signed distance functions) while allowing evaluation to terminate early once sufficient accuracy is reached (see the sketch after this list).
  • Experiments reported in the paper show that SAND can substantially speed up inference-time queries for implicit neural surface representations while maintaining high-fidelity results.
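
The tail-per-layer design above is concrete enough to sketch. The following minimal PyTorch sketch shows one way such a tailed MLP could be wired up; the class name `TailedMLP`, the hyperparameters, and the ReLU activations are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of a "tailed MLP" (T-MLP): a plain SDF MLP with an
# extra scalar output head (a "tail") after every hidden layer, so that
# evaluation can return a prediction from any depth.
import torch
import torch.nn as nn

class TailedMLP(nn.Module):
    def __init__(self, in_dim: int = 3, hidden_dim: int = 256, num_layers: int = 8):
        super().__init__()
        dims = [in_dim] + [hidden_dim] * num_layers
        self.layers = nn.ModuleList(
            nn.Linear(dims[i], dims[i + 1]) for i in range(num_layers)
        )
        # One scalar output head ("tail") per hidden layer.
        self.tails = nn.ModuleList(
            nn.Linear(hidden_dim, 1) for _ in range(num_layers)
        )

    def forward(self, x: torch.Tensor, depth: int | None = None) -> torch.Tensor:
        """Return the SDF prediction from the tail at `depth` (1-based);
        depth=None traverses the full network."""
        depth = depth if depth is not None else len(self.layers)
        for i in range(depth):
            x = torch.relu(self.layers[i](x))
        return self.tails[depth - 1](x)
```

During training, one plausible choice is to supervise every tail jointly so that each depth yields a usable (if coarser) SDF prediction; the paper's exact loss is not detailed in this summary.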

Abstract

Implicit neural representations are powerful for geometric modeling, but their practical use is often limited by the high computational cost of network evaluations. We observe that implicit representations require progressively lower accuracy as query points move farther from the target surface, and that even within the same iso-surface, representation difficulty varies spatially with local geometric complexity. However, conventional neural implicit models evaluate all query points with the same network depth and computational cost, ignoring this spatial variation and thereby incurring substantial computational waste. Motivated by this observation, we propose an efficient neural implicit geometry representation framework with spatially adaptive network depth (SAND). SAND leverages a volumetric network-depth map together with a tailed multi-layer perceptron (T-MLP) to model the implicit representation. The volumetric depth map records, for each spatial region, the network depth required to achieve sufficient accuracy, while the T-MLP is a modified MLP designed to learn implicit functions such as signed distance functions, where an output branch, referred to as a tail, is attached to each hidden layer. This design allows network evaluation to terminate adaptively without traversing the full network and directs computational resources to geometrically important and complex regions, improving efficiency while preserving high-fidelity representations. Extensive experimental results demonstrate that our approach can significantly improve the inference-time query speed of implicit neural representations.
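
As a companion to the T-MLP sketch above, the following sketch illustrates how a volumetric depth map could drive adaptive evaluation at query time: each query point looks up the depth its voxel requires, points are bucketed by depth, and each bucket traverses only that many layers. The grid layout, bounding-box handling, and the `adaptive_sdf_query` helper are hypothetical; the abstract does not specify how the depth map is constructed or indexed.

```python
# Hypothetical adaptive query loop over a coarse voxel grid of required
# depths; shallow buckets exit the network early, deep buckets run longer.
import torch

@torch.no_grad()
def adaptive_sdf_query(points, model, depth_map, bbox_min, bbox_max):
    """points: (N, 3) queries; model: a TailedMLP as sketched above;
    depth_map: (R, R, R) long tensor of per-voxel required depths."""
    res = depth_map.shape[0]
    # Map each point to the voxel of the depth map that contains it.
    u = (points - bbox_min) / (bbox_max - bbox_min)      # normalize to [0, 1]
    idx = (u * res).long().clamp_(0, res - 1)            # (N, 3) voxel coords
    depths = depth_map[idx[:, 0], idx[:, 1], idx[:, 2]]  # (N,) per-point depth
    out = torch.empty(points.shape[0], 1, device=points.device)
    # Evaluate one bucket per distinct depth instead of point-by-point,
    # so early termination still batches well on GPU.
    for d in depths.unique():
        mask = depths == d
        out[mask] = model(points[mask], depth=int(d))
    return out
```

Bucketing by depth rather than branching per point keeps the evaluation vectorized, which is one way an adaptive-depth scheme can translate into actual wall-clock savings.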