Non-Minimal Sampling and Consensus for Prohibitively Large Datasets

arXiv cs.CV / 4/27/2026


Key Points

  • The paper introduces NONSAC, a framework for robust, scalable model estimation on extremely large datasets that include noise and outliers.
  • NONSAC works by repeatedly sampling non-minimal data subsets, generating multiple model hypotheses using a robust estimator, and selecting the best hypothesis via a predefined scoring rule.
  • The method is “estimator-agnostic,” allowing integration with existing geometric fitting approaches such as RANSAC to improve both robustness to outliers and scalability.
  • The authors evaluate several scoring-rule variants of NONSAC on relative camera pose estimation, Perspective-n-Point (PnP), and point cloud registration, and extend the method to correspondence-free point cloud registration by hypothesizing all-to-all correspondences.

Abstract

We introduce NONSAC (Non-Minimal Sampling and Consensus), a general framework for robust and scalable model estimation from arbitrarily large datasets contaminated with noise and outliers. NONSAC repeatedly samples non-minimal subsets of data and generates model hypotheses using a robust estimator, producing multiple candidate models. The final model is selected based on a predefined scoring rule that evaluates hypothesis quality. Our framework is estimator-agnostic and can be integrated with existing geometric fitting algorithms such as RANSAC to improve both scalability and robustness to outliers. We propose and evaluate various scoring rules for NONSAC on relative camera pose estimation, Perspective-n-Point, and point cloud registration. Furthermore, we showcase the applicability of NONSAC to correspondence-free point cloud registration by hypothesizing all-to-all correspondences.
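The loop described in the abstract (sample a non-minimal subset, fit one hypothesis with a robust estimator, score it, keep the best) can be sketched in a few lines of Python. The sketch below is illustrative only: the `nonsac` function, the least-squares line fit standing in for the robust estimator, and the inlier-count scoring rule are assumptions made for a toy example, not the estimators or the scoring rules proposed in the paper.

```python
import numpy as np

def nonsac(data, estimator, score_fn, subset_size, n_iters, seed=0):
    """Illustrative NONSAC-style loop: draw non-minimal random subsets,
    fit one hypothesis per subset, and keep the hypothesis that
    maximizes a predefined scoring rule."""
    rng = np.random.default_rng(seed)
    best_model, best_score = None, -np.inf
    for _ in range(n_iters):
        idx = rng.choice(len(data), size=subset_size, replace=False)
        model = estimator(data[idx])       # non-minimal subset -> one candidate model
        score = score_fn(model, data)      # scoring rule evaluating hypothesis quality
        if score > best_score:
            best_model, best_score = model, score
    return best_model

# Toy usage: 2D line fitting with 30% outliers. Least squares stands in
# for the robust estimator; inlier counting stands in for the scoring rule.
rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 10_000)
y = 2.0 * x + 0.5 + rng.normal(0, 0.02, x.size)
y[:3_000] = rng.uniform(-3, 3, 3_000)
data = np.stack([x, y], axis=1)

fit_line = lambda d: np.polyfit(d[:, 0], d[:, 1], 1)   # (slope, intercept)
count_inliers = lambda m, d: int(np.sum(np.abs(np.polyval(m, d[:, 0]) - d[:, 1]) < 0.05))

print(nonsac(data, fit_line, count_inliers, subset_size=200, n_iters=50))
```

The contrast with classic RANSAC, as the abstract frames it, is that each hypothesis is fit from a subset larger than the minimal problem size, so the per-subset fit can itself absorb noise and some outliers; how the subsets are sized and how hypotheses are scored on very large datasets are the design choices the paper studies.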
