PanoSAMic: Panoramic Image Segmentation from SAM Feature Encoding and Dual View Fusion

arXiv cs.CV / 4/27/2026


Key Points

  • The paper argues that existing image foundation models underperform on spherical panoramic images because they are largely trained on perspective imagery.
  • PanoSAMic reuses the pre-trained Segment Anything (SAM) encoder, modifying it to output multi-stage features for semantic segmentation in panoramic settings (see the feature-extraction sketch after this list).
  • It introduces a spatio-modal fusion module that dynamically selects the most relevant modalities and features for each region of the input, improving robustness across input types (see the gating sketch below).
  • For panorama-specific challenges such as distortions and edge discontinuities, the model’s decoder uses spherical attention and dual-view fusion.
  • The authors report state-of-the-art results on Stanford2D3DS (for RGB, RGB-D, and RGB-D-N) and strong performance on Matterport3D (for RGB and RGB-D), and provide an implementation link.
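
As a rough illustration of the multi-stage feature extraction, the sketch below hooks intermediate transformer blocks of a frozen SAM image encoder. The `image_encoder.blocks` attribute and the 1024×1024 input size follow the official `segment_anything` package; the specific stage indices and the fully frozen encoder are assumptions for illustration, not PanoSAMic's exact configuration.

```python
import torch
from segment_anything import sam_model_registry

# Load a pre-trained SAM backbone and freeze it so only later modules would be trained.
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
encoder = sam.image_encoder.eval()
for p in encoder.parameters():
    p.requires_grad = False

stage_ids = [2, 5, 8, 11]  # assumed taps: one per quarter of ViT-B's 12 blocks
features = {}

def make_hook(name):
    def hook(module, inputs, output):
        # SAM's ViT blocks keep tokens in (B, H, W, C); store them as (B, C, H, W).
        features[name] = output.permute(0, 3, 1, 2)
    return hook

for i in stage_ids:
    encoder.blocks[i].register_forward_hook(make_hook(f"stage{i}"))

with torch.no_grad():
    _ = encoder(torch.zeros(1, 3, 1024, 1024))  # SAM expects 1024x1024 inputs

for name, feat in features.items():
    print(name, tuple(feat.shape))  # e.g. stage2 -> (1, 768, 64, 64) for ViT-B
```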

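The spatio-modal fusion can be pictured as a learned, per-location soft selection over the available modality features. The module below is a minimal sketch under that assumption; the name `SpatioModalGate` and the softmax gating are hypothetical placeholders, not the paper's actual design.

```python
import torch
import torch.nn as nn

class SpatioModalGate(nn.Module):
    """Minimal sketch: per-location soft selection over modality feature maps."""

    def __init__(self, channels: int, num_modalities: int):
        super().__init__()
        # Predict one gate logit per modality at every spatial location.
        self.gate = nn.Conv2d(channels * num_modalities, num_modalities, kernel_size=1)

    def forward(self, feats: list[torch.Tensor]) -> torch.Tensor:
        # feats: list of (B, C, H, W) maps, one per modality (e.g. RGB, depth, normals).
        stacked = torch.stack(feats, dim=1)           # (B, M, C, H, W)
        logits = self.gate(torch.cat(feats, dim=1))   # (B, M, H, W)
        weights = logits.softmax(dim=1).unsqueeze(2)  # (B, M, 1, H, W)
        return (weights * stacked).sum(dim=1)         # (B, C, H, W)

# Usage with dummy RGB / depth / normal features:
fuse = SpatioModalGate(channels=256, num_modalities=3)
rgb, depth, normals = (torch.randn(1, 256, 64, 64) for _ in range(3))
print(fuse([rgb, depth, normals]).shape)  # torch.Size([1, 256, 64, 64])
```
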
Abstract

Existing image foundation models are not optimized for spherical images, having been trained primarily on perspective images. PanoSAMic leverages the extensive pre-training of the Segment Anything (SAM) encoder by integrating it into a multi-modal semantic segmentation model for panoramic images. We modify the SAM encoder to output multi-stage features and introduce a novel spatio-modal fusion module that lets the model select the relevant modalities, and the best features from each modality, for different areas of the input. Furthermore, our semantic decoder uses spherical attention and dual-view fusion to overcome the distortions and edge discontinuities often associated with panoramic images. PanoSAMic achieves state-of-the-art (SotA) results on Stanford2D3DS for RGB, RGB-D, and RGB-D-N modalities and on Matterport3D for RGB and RGB-D modalities. Code: https://github.com/dfki-av/PanoSAMic
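
To make the dual-view idea concrete, here is a minimal sketch assuming the second view is the same equirectangular panorama rolled by half its width, so the left/right seam lands in the centre of the image; the `DualViewFusion` wrapper and the simple feature averaging are placeholders, not the paper's actual decoder-level fusion.

```python
import torch
import torch.nn as nn

class DualViewFusion(nn.Module):
    """Encode the panorama and a half-width-rolled copy, then fuse in feature space."""

    def __init__(self, encoder: nn.Module):
        super().__init__()
        self.encoder = encoder  # any (B, C, H, W) -> (B, C', H', W') feature extractor

    def forward(self, pano: torch.Tensor) -> torch.Tensor:
        shift = pano.shape[-1] // 2
        view_a = pano
        view_b = torch.roll(pano, shifts=shift, dims=-1)  # moves the seam to the centre

        feat_a = self.encoder(view_a)
        feat_b = self.encoder(view_b)
        # Undo the roll in feature space (feature width assumed proportional to input width).
        feat_b = torch.roll(feat_b, shifts=-(feat_b.shape[-1] // 2), dims=-1)

        return 0.5 * (feat_a + feat_b)  # placeholder fusion: simple averaging
```

The intuition is that every pixel near the original seam sits in the interior of the rolled view, so combining the two passes suppresses the edge discontinuity a single pass over the panorama would exhibit.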