Topo-ADV: Generating Topology-Driven Imperceptible Adversarial Point Clouds

arXiv cs.CV / 4/14/2026


Key Points

  • The paper presents Topo-ADV, a new topology-driven method for generating adversarial point clouds that exploits the homological (topological) structure as a vulnerability surface for 3D deep learning models.
  • Topo-ADV uses an end-to-end differentiable framework that incorporates persistent homology into the optimization, embedding persistence diagrams via differentiable topological representations.
  • The attack jointly optimizes a topology-divergence loss (to alter persistence), a misclassification objective, and geometric imperceptibility constraints to keep perturbations visually plausible (a sketch of this composite objective follows this list).
  • Experiments on benchmarks (ModelNet40, ShapeNet Part, ScanObjectNN) with PointNet and DGCNN report attack success rates up to 100% while maintaining geometric indistinguishability and improving over prior state-of-the-art methods on perceptibility metrics.
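To make the joint objective concrete, here is a PyTorch-style sketch of the three-term loss described above. It is an illustration under assumptions, not the paper's implementation: the weights `alpha`, `beta`, `gamma`, the sign convention on the topology term, and the use of a Chamfer distance as the imperceptibility constraint are placeholders, and `topo_div` stands in for the paper's persistence-based divergence.

```python
import torch
import torch.nn.functional as F

def topo_adv_loss(model, x_adv, x_orig, label, topo_div,
                  alpha=1.0, beta=1.0, gamma=1.0):
    """Hypothetical composite objective: misclassification +
    topology divergence + geometric imperceptibility."""
    logits = model(x_adv)                                  # (B, num_classes)
    # Untargeted misclassification: maximize the true-class loss.
    cls_loss = -F.cross_entropy(logits, label)
    # Topology divergence between adversarial and clean clouds; the minus
    # sign assumes the attack tries to *increase* the divergence.
    topo_loss = -topo_div(x_adv, x_orig)
    # Chamfer distance as a stand-in geometric imperceptibility term.
    d = torch.cdist(x_adv, x_orig)                         # (B, N, N)
    chamfer = d.min(dim=2).values.mean() + d.min(dim=1).values.mean()
    return alpha * cls_loss + beta * topo_loss + gamma * chamfer
```

Minimizing this loss by gradient descent on `x_adv` (for example with Adam, projecting or clipping the perturbation after each step) would give the iterative attack loop; the paper's exact optimizer and constraints are not specified here.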

Abstract

Deep neural networks for 3D point cloud understanding have achieved remarkable success in object classification and recognition, yet recent work shows that these models remain highly vulnerable to adversarial perturbations. Existing 3D attacks predominantly manipulate geometric properties such as point locations, curvature, or surface structure, implicitly assuming that preserving global shape fidelity preserves semantic content. In this work, we challenge this assumption and introduce the first topology-driven adversarial attack for point cloud deep learning. Our key insight is that the homological structure of a 3D object constitutes a previously unexplored vulnerability surface. We propose Topo-ADV, an end-to-end differentiable framework that incorporates persistent homology as an explicit optimization objective, enabling gradient-based manipulation of topological features during adversarial example generation. By embedding persistence diagrams through differentiable topological representations, our method jointly optimizes (i) a topology divergence loss that alters persistence, (ii) a misclassification objective, and (iii) geometric imperceptibility constraints that preserve visual plausibility. Experiments demonstrate that subtle topology-driven perturbations consistently achieve up to 100% attack success rates on benchmark datasets such as ModelNet40, ShapeNet Part, and ScanObjectNN using PointNet and DGCNN classifiers, while remaining geometrically indistinguishable from the original point clouds, outperforming state-of-the-art methods on various perceptibility metrics.
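For intuition about how persistence values can be differentiated with respect to point coordinates, the following minimal sketch is restricted to 0-dimensional homology of a Vietoris-Rips filtration, where the finite death times coincide with the edge lengths of a Euclidean minimum spanning tree. The pairing (which edges belong to the MST) is found non-differentiably with SciPy, while the values are gathered from the differentiable torch distance matrix, so gradients reach the point coordinates. This is a common subgradient-style trick, not the paper's full persistence-diagram embedding, and the `topo_divergence` comparison of sorted death times below is likewise a stand-in.

```python
import torch
from scipy.sparse.csgraph import minimum_spanning_tree

def h0_persistence(points):
    """H0 death times of a Vietoris-Rips filtration on one (N, 3) cloud.

    For H0 these equal the Euclidean MST edge lengths. The MST structure
    comes from SciPy (non-differentiable), but values are indexed out of
    the differentiable torch distance matrix, so gradients flow to `points`.
    """
    d = torch.cdist(points, points)                        # (N, N) pairwise distances
    mst = minimum_spanning_tree(d.detach().cpu().numpy()).tocoo()
    rows = torch.as_tensor(mst.row, dtype=torch.long, device=points.device)
    cols = torch.as_tensor(mst.col, dtype=torch.long, device=points.device)
    return d[rows, cols]                                   # (N-1,) differentiable death times

def topo_divergence(x_adv, x_orig):
    """Toy divergence: L2 gap between sorted H0 death times of two clouds."""
    pers_adv = torch.sort(h0_persistence(x_adv)).values
    pers_org = torch.sort(h0_persistence(x_orig)).values
    return torch.norm(pers_adv - pers_org)
```

The paper reports embedding full persistence diagrams through differentiable topological representations; higher homology dimensions and proper diagram metrics (e.g., Wasserstein distances between diagrams) would replace this toy H0 comparison in a faithful reproduction.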