Training-free Detection and 6D Pose Estimation of Unseen Surgical Instruments

arXiv cs.CV / 3/27/2026


Key Points

  • The paper proposes a training-free pipeline for detecting and estimating the 6D pose of unseen surgical instruments using only a textured CAD model as prior knowledge.
  • For detection, it generates per-view object mask proposals, scores them against rendered templates with a pre-trained feature extractor, matches them across views, triangulates 3D candidates, and filters results via multi-view geometric consistency.
  • For pose estimation, it iteratively refines pose hypotheses using feature-metric scoring with cross-view attention, then performs a final multi-view occlusion-aware contour registration to minimize reprojection errors on unoccluded contours.
  • Evaluations on real-world surgical data from the MVPSP dataset show millimeter-accurate pose estimates on par with supervised methods under controlled conditions, while generalizing to instruments not seen during development.
  • The work highlights the feasibility of marker-less, training-free instrument tracking and scene understanding in dynamic clinical environments, addressing limitations of supervised approaches that require large labeled datasets.
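The multi-view steps in the detection stage (triangulating matched per-view detections into 3D candidates, then filtering by geometric consistency) can be sketched as follows. This is a minimal illustration assuming known camera projection matrices; the function names and the pixel threshold are illustrative, not taken from the paper.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two views.
    P1, P2: 3x4 projection matrices; x1, x2: 2D pixel coordinates."""
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # homogeneous -> Euclidean

def reprojection_error(P, X, x):
    """Pixel distance between a projected 3D point and its 2D observation."""
    proj = P @ np.append(X, 1.0)
    return np.linalg.norm(proj[:2] / proj[2] - x)

def is_consistent(projections, detections, X, thresh_px=5.0):
    """Keep a 3D candidate only if it reprojects close to its matched
    detection in every view (the multi-view geometric consistency filter)."""
    return all(reprojection_error(P, X, x) <= thresh_px
               for P, x in zip(projections, detections))
```

In practice each 2D detection would be a mask centroid or keypoint; candidates failing the consistency check in any view are discarded as spurious matches.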

Abstract

Purpose: Accurate detection and 6D pose estimation of surgical instruments are crucial for many computer-assisted interventions. However, supervised methods lack flexibility for new or unseen tools and require extensive annotated data. This work introduces a training-free pipeline for accurate multi-view 6D pose estimation of unseen surgical instruments, which only requires a textured CAD model as prior knowledge.

Methods: Our pipeline consists of two main stages. First, for detection, we generate object mask proposals in each view and score their similarity to rendered templates using a pre-trained feature extractor. Detections are matched across views, triangulated into 3D instance candidates, and filtered using multi-view geometric consistency. Second, for pose estimation, a set of pose hypotheses is iteratively refined and scored using feature-metric scores with cross-view attention. The best hypothesis undergoes a final refinement using a novel multi-view, occlusion-aware contour registration, which minimizes reprojection errors of unoccluded contour points.

Results: The proposed method was rigorously evaluated on real-world surgical data from the MVPSP dataset. The method achieves millimeter-accurate pose estimates that are on par with supervised methods under controlled conditions, while maintaining full generalization to unseen instruments. These results demonstrate the feasibility of training-free, marker-less detection and tracking in surgical scenes, and highlight the unique challenges in surgical environments.

Conclusion: We present a novel and flexible pipeline that effectively combines state-of-the-art foundational models, multi-view geometry, and contour-based refinement for high-accuracy 6D pose estimation of surgical instruments without task-specific training. This approach enables robust instrument tracking and scene understanding in dynamic clinical environments.
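The template-scoring step in the detection stage — comparing a cropped mask proposal against descriptors of rendered CAD templates — reduces to a nearest-neighbor search in feature space. A minimal sketch, assuming descriptors come from some pre-trained extractor (the abstract does not name one; a ViT such as DINOv2 would be a typical choice) and using plain cosine similarity:

```python
import numpy as np

def score_proposal(f_proposal, template_feats):
    """Score one mask proposal against T rendered-template descriptors.
    f_proposal: (D,) feature vector of the cropped proposal.
    template_feats: (T, D) feature vectors of the rendered templates.
    Returns (best template index, cosine similarity of that match)."""
    f = f_proposal / np.linalg.norm(f_proposal)
    T = template_feats / np.linalg.norm(template_feats, axis=1, keepdims=True)
    sims = T @ f  # cosine similarity to every template in one matmul
    best = int(np.argmax(sims))
    return best, float(sims[best])
```

The best-matching template also supplies an initial viewpoint for the pose-hypothesis set that the second stage refines; proposals whose best similarity falls below a threshold would be rejected before the cross-view matching step.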