One Shot Learning for Edge Detection on Point Clouds
arXiv cs.CV / April 27, 2026
Key Points
- The paper introduces a one-shot learning approach for edge extraction on point clouds, motivated by the observation that different scanners produce distinct sampling error distributions.
- It proposes training a lightweight network, OSFENet (One-Shot edge Feature Extraction Network), using a filtered-KNN-based surface patch representation tailored to one-shot learning.
- The method adds an RBF_DoS module that uses an RBF-based descriptor of surface patches to improve edge detection performance.
- Experiments on the ABC dataset compare the approach against seven baselines, and additional evaluations on multiple real-scan datasets (S3DIS, Semantic3D, UrbanBIS) support its practical effectiveness.
- Overall, the work shows that learning the target point cloud’s specific distribution can outperform networks trained on broader, cross-scanner data distributions.
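The two building blocks named above — a distance-filtered KNN surface patch and an RBF-based patch descriptor — can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the function names, the filtering criterion (a simple radius cutoff), and the Gaussian-kernel descriptor with fixed centers are all assumptions made here for clarity.

```python
import numpy as np

def filtered_knn_patch(points, center_idx, k=16, radius=0.5):
    """Gather a local surface patch: the k nearest neighbors of a point,
    with neighbors beyond `radius` filtered out. (The paper's exact
    filtering rule is not specified here; a radius cutoff is assumed.)"""
    center = points[center_idx]
    d = np.linalg.norm(points - center, axis=1)
    nn = np.argsort(d)[: k + 1]          # k neighbors plus the center itself
    nn = nn[d[nn] <= radius]             # drop far-away (likely off-surface) points
    return points[nn] - center           # patch in center-relative coordinates

def rbf_descriptor(patch, centers, gamma=10.0):
    """Hypothetical RBF-based patch descriptor: for each fixed RBF center,
    sum Gaussian kernel responses over all points in the patch."""
    # pairwise squared distances, shape (num_patch_points, num_centers)
    d2 = ((patch[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2).sum(axis=0)   # one response per RBF center

# Usage on synthetic data
rng = np.random.default_rng(0)
pts = rng.random((200, 3))               # stand-in for a scanned point cloud
patch = filtered_knn_patch(pts, center_idx=0)
desc = rbf_descriptor(patch, centers=rng.random((8, 3)))
```

The descriptor is a fixed-length vector regardless of how many neighbors survive the filter, which is what makes it convenient as input to a lightweight network such as OSFENet.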