Learning Hierarchical Orthogonal Prototypes for Generalized Few-Shot 3D Point Cloud Segmentation

arXiv cs.CV / 3/23/2026

📰 News · Models & Research

Key Points

  • HOP3D introduces hierarchical orthogonalization to decouple base and novel learning at both the gradient and representation levels, effectively mitigating base–novel interference in generalized few-shot 3D point cloud segmentation.
  • It adds an entropy-based few-shot regularizer that leverages predictive uncertainty to refine prototype learning and promote balanced predictions under sparse supervision.
  • The framework demonstrates consistent improvements over state-of-the-art baselines on ScanNet200 and ScanNet++ in both 1-shot and 5-shot settings.
  • The authors provide code for the approach at the project page https://fdueblab-hop3d.github.io/.
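The "hierarchical orthogonalization" in the first key point can be illustrated with two standard operations: projecting novel-class gradients onto the orthogonal complement of the base-class gradient direction (gradient level), and Gram–Schmidt-style projection of novel prototypes against base prototypes (representation level). The sketch below is an assumption-laden illustration of these generic techniques, not the paper's actual implementation; the function names `project_out` and `orthogonalize_prototype` are hypothetical.

```python
import math

def project_out(g_novel, g_base):
    """Remove the component of g_novel lying along g_base, so a
    novel-class update cannot move parameters along the base-class
    gradient direction (illustrative gradient-level decoupling)."""
    dot = sum(a * b for a, b in zip(g_novel, g_base))
    norm_sq = sum(b * b for b in g_base)
    if norm_sq == 0.0:
        return list(g_novel)
    scale = dot / norm_sq
    return [a - scale * b for a, b in zip(g_novel, g_base)]

def orthogonalize_prototype(p_novel, base_prototypes):
    """Project a novel-class prototype out of each base prototype in
    turn (a Gram-Schmidt step; exact orthogonality to all base
    prototypes holds when they are mutually orthogonal).
    Illustrative representation-level decoupling."""
    v = list(p_novel)
    for p in base_prototypes:
        v = project_out(v, p)
    return v

# Example: a novel gradient loses its base-aligned component.
g = project_out([1.0, 1.0], [1.0, 0.0])        # -> [0.0, 1.0]
# Example: a novel prototype becomes orthogonal to two base prototypes.
p = orthogonalize_prototype([1.0, 1.0, 1.0],
                            [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
```

After projection, the residual gradient and prototype carry only the directions unused by the base classes, which is the intuition behind decoupling base and novel learning.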

Abstract

Generalized few-shot 3D point cloud segmentation aims to adapt to novel classes from only a few annotations while maintaining strong performance on base classes, but this remains challenging due to the inherent stability-plasticity trade-off: adapting to novel classes can interfere with shared representations and cause base-class forgetting. We present HOP3D, a unified framework that learns hierarchical orthogonal prototypes with an entropy-based few-shot regularizer to enable robust novel-class adaptation without degrading base-class performance. HOP3D introduces hierarchical orthogonalization that decouples base and novel learning at both the gradient and representation levels, effectively mitigating base-novel interference. To further enhance adaptation under sparse supervision, we incorporate an entropy-based regularizer that leverages predictive uncertainty to refine prototype learning and promote balanced predictions. Extensive experiments on ScanNet200 and ScanNet++ demonstrate that HOP3D consistently outperforms state-of-the-art baselines under both 1-shot and 5-shot settings. The code is available at https://fdueblab-hop3d.github.io/.
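The entropy-based regularizer described in the abstract can be sketched generically: compute the Shannon entropy of each point's predicted class distribution and average it into a penalty term. This is a minimal stdlib-only illustration of predictive-entropy regularization in general; the function names, the sign convention, and the `weight` parameter are assumptions, not details from the paper.

```python
import math

def softmax(logits):
    """Numerically stable softmax over one point's class logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def entropy(probs, eps=1e-12):
    """Shannon entropy of a probability vector (in nats)."""
    return -sum(p * math.log(p + eps) for p in probs)

def entropy_regularizer(logits_per_point, weight=0.1):
    """Mean predictive entropy over all points, scaled by an
    illustrative weight; a term like this can be added to the few-shot
    loss to shape how confident or balanced predictions are under
    sparse supervision."""
    h = [entropy(softmax(l)) for l in logits_per_point]
    return weight * sum(h) / len(h)
```

A uniform prediction has maximal entropy (ln K for K classes) while a confident one has near-zero entropy, so the term gives the optimizer a handle on predictive uncertainty.
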