One Shot Learning for Edge Detection on Point Clouds

arXiv cs.CV / 4/27/2026


Key Points

  • The paper introduces a one-shot learning approach for edge extraction on point clouds, motivated by the observation that different scanners produce distinct sampling error distributions.
  • It proposes training a lightweight network, OSFENet (One-Shot edge Feature Extraction Network), using a filtered-KNN-based surface patch representation tailored to one-shot learning.
  • The method adds an RBF_DoS module that uses an RBF-based descriptor of surface patches to improve edge detection performance.
  • Experiments on the ABC dataset compare the approach against 7 baselines, and additional evaluations on multiple real-scanned datasets (S3DIS, Semantic3D, UrbanBIS) support its practical effectiveness.
  • Overall, the work shows that learning the target point cloud’s specific distribution can outperform networks trained on broader, cross-scanner data distributions.

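The summary does not spell out how the filtered-KNN surface patches are built, so as a rough illustration only, here is a minimal sketch of one plausible construction: take the k nearest neighbors of a query point, drop neighbors whose distance looks like an outlier (a hypothetical filtering rule, not the paper's), and center the patch on the query point.

```python
import numpy as np

def knn_patch(points, idx, k=16, dist_ratio=2.0):
    """Extract a KNN surface patch around points[idx], then filter out
    neighbors farther than dist_ratio * the median neighbor distance
    (an illustrative filtering rule; the paper's exact filter may differ)."""
    d = np.linalg.norm(points - points[idx], axis=1)
    nn = np.argsort(d)[1:k + 1]           # k nearest neighbors, excluding the query itself
    med = np.median(d[nn])
    keep = nn[d[nn] <= dist_ratio * med]  # drop distance outliers
    # center the patch on the query point (a common normalization)
    return points[keep] - points[idx]

pts = np.random.default_rng(0).random((200, 3))
patch = knn_patch(pts, 0)
```

Patch representations like this are scanner-specific by construction, which is consistent with the paper's motivation: the neighbor-distance statistics reflect the target scanner's sampling distribution.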
Abstract

Each scanner possesses unique characteristics and exhibits a distinct sampling-error distribution. Training a network on a dataset that includes data collected from different scanners is less effective than training it on data specific to a single scanner. Therefore, we present a novel one-shot learning method for edge extraction on point clouds that learns the specific data distribution of the target point cloud, and thus achieves superior results compared to networks trained on general data distributions. More specifically, we show how to train a lightweight network named OSFENet (One-Shot edge Feature Extraction Network) by designing a filtered-KNN-based surface patch representation that supports a one-shot learning framework. Additionally, we introduce an RBF_DoS module, which integrates a Radial Basis Function-based Descriptor of the Surface patch and is highly beneficial for edge extraction on point clouds. The advantage of the proposed OSFENet is demonstrated through comparative analyses against 7 baselines on the ABC dataset, and its practical utility is validated by results across diverse real-scanned datasets, including indoor scenes such as the S3DIS dataset and outdoor scenes such as the Semantic3D and UrbanBIS datasets.
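The abstract does not define the RBF_DoS formulation, so the following is only an illustrative sketch of a generic RBF-based patch descriptor: each entry of the vector is the mean Gaussian RBF response of the patch points to one of a fixed set of centers. The center placement, kernel width, and pooling here are all assumptions, not the paper's design.

```python
import numpy as np

def rbf_descriptor(patch, centers, sigma=0.1):
    """Encode a centered surface patch as a fixed-length vector:
    entry j is the mean Gaussian RBF response of the patch points
    to centers[j] (an illustrative descriptor, not the exact RBF_DoS)."""
    # pairwise squared distances, shape (n_points, n_centers)
    d2 = ((patch[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2)).mean(axis=0)

rng = np.random.default_rng(1)
patch = rng.normal(scale=0.05, size=(20, 3))       # toy centered patch
centers = rng.uniform(-0.1, 0.1, size=(8, 3))      # hypothetical RBF centers
desc = rbf_descriptor(patch, centers)              # shape (8,)
```

A descriptor of this kind is invariant to the ordering of the patch points, which makes it a natural fixed-size input for a lightweight network operating on unordered point neighborhoods.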