AI Navigate

Evaluating Vision Foundation Models for Pixel and Object Classification in Microscopy

arXiv cs.CV / 3/23/2026


Key Points

  • The study evaluates general-purpose VFMs (SAM, SAM2, DINOv3) and domain-specific VFMs (μSAM, PathoSAM) for pixel classification (semantic segmentation) and object-level classification in microscopy.
  • The VFM features are combined with shallow learning and attentive probing on five diverse and challenging datasets to benchmark the models in microscopy (minimal sketches of both approaches are given below).
  • The results show consistent improvements over hand-crafted features, suggesting practical benefits for biomedical imaging tasks.
  • The work establishes a benchmark and lays out a clear pathway for future development of VFMs in microscopy.
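
The shallow-learning route mentioned above can be pictured as: extract dense patch features from a frozen VFM, upsample them to pixel resolution, and fit a classical classifier on the sparsely labeled pixels. The sketch below illustrates this pipeline with a stubbed-out feature extractor and scikit-learn's RandomForestClassifier; the extractor, feature dimensions, and scribble labels are placeholder assumptions, not the paper's implementation.

```python
# Sketch: interactive pixel classification with VFM features and a shallow learner.
# Assumption: `extract_patch_features` is a stand-in for a real VFM encoder
# (e.g. DINO patch tokens or the SAM image-encoder feature map).
import numpy as np
import torch
import torch.nn.functional as F
from sklearn.ensemble import RandomForestClassifier

def extract_patch_features(image: torch.Tensor, dim: int = 384, patch: int = 14) -> torch.Tensor:
    """Placeholder: returns (dim, H//patch, W//patch) patch features. Swap in a frozen VFM."""
    _, h, w = image.shape
    return torch.randn(dim, h // patch, w // patch)

def pixel_features(image: torch.Tensor) -> np.ndarray:
    """Upsample patch features to pixel resolution -> (H*W, dim) feature matrix."""
    feats = extract_patch_features(image)                      # (C, h, w)
    _, H, W = image.shape
    feats = F.interpolate(feats[None], size=(H, W), mode="bilinear", align_corners=False)[0]
    return feats.permute(1, 2, 0).reshape(-1, feats.shape[0]).numpy()

# Toy example: one image with sparse scribble labels (0 = unlabeled, 1..K = classes).
image = torch.rand(3, 224, 224)
labels = np.zeros(224 * 224, dtype=np.int64)
labels[:500] = 1        # pretend scribbles for class 1
labels[500:1000] = 2    # pretend scribbles for class 2

X = pixel_features(image)
mask = labels > 0
clf = RandomForestClassifier(n_estimators=100).fit(X[mask], labels[mask])
prediction = clf.predict(X).reshape(224, 224)  # dense semantic map
```

With real VFM features this is the same workflow as classical interactive pixel classification, only the hand-crafted filter bank is replaced by frozen foundation-model embeddings.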

Abstract

Deep learning underlies most modern approaches and tools in computer vision, including biomedical imaging. However, for interactive semantic segmentation (often called pixel classification in this context) and interactive object-level classification (object classification), feature-based shallow learning remains widely used. This is due to the diversity of data in this domain, the lack of large pretraining datasets, and the need for computational and label efficiency. In contrast, state-of-the-art tools for many other vision tasks in microscopy - most notably cellular instance segmentation - already rely on deep learning and have recently benefited substantially from vision foundation models (VFMs), particularly SAM. Here, we investigate whether VFMs can also improve pixel and object classification compared to current approaches. To this end, we evaluate several VFMs, including general-purpose models (SAM, SAM2, DINOv3) and domain-specific ones (μSAM, PathoSAM), in combination with shallow learning and attentive probing on five diverse and challenging datasets. Our results demonstrate consistent improvements over hand-crafted features and provide a clear pathway toward practical improvements. Furthermore, our study establishes a benchmark for VFMs in microscopy and informs future developments in this area.
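
As a rough illustration of the attentive probing mentioned in the abstract, the sketch below trains a small attention-pooling head on frozen patch tokens for object-level classification. The head architecture, dimensions, and dummy inputs are generic assumptions for illustration, not the authors' exact probe.

```python
# Sketch of attentive probing: a learnable query cross-attends over frozen VFM
# patch tokens, and a linear layer classifies the pooled representation.
# Assumption: token extraction and per-object cropping happen elsewhere.
import torch
import torch.nn as nn

class AttentiveProbe(nn.Module):
    def __init__(self, dim: int, num_classes: int, num_heads: int = 8):
        super().__init__()
        self.query = nn.Parameter(torch.randn(1, 1, dim) * 0.02)  # learnable pooling query
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (B, N, dim) frozen patch embeddings, one set per object crop
        q = self.query.expand(tokens.size(0), -1, -1)
        pooled, _ = self.attn(q, tokens, tokens)         # (B, 1, dim)
        return self.head(self.norm(pooled.squeeze(1)))   # (B, num_classes) logits

# Usage: only the probe is trained; the VFM backbone stays frozen.
probe = AttentiveProbe(dim=384, num_classes=4)
tokens = torch.randn(8, 256, 384)   # e.g. 16x16 patch tokens per object
logits = probe(tokens)
```

The appeal of this setup is label efficiency: with the backbone frozen, only the small probe is optimized, which keeps training cheap enough for interactive use.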