Amodal SAM: A Unified Amodal Segmentation Framework with Generalization

arXiv cs.CV · April 23, 2026

📰 News · Models & Research

Key Points

  • The paper proposes “Amodal SAM,” a unified framework that adapts Meta’s SAM to perform both amodal image and amodal video segmentation, including occluded regions.
  • It preserves SAM’s strong generalization while extending it to amodal segmentation via a lightweight Spatial Completion Adapter that reconstructs occluded regions.
  • To address limited amodal annotations, it introduces Target-Aware Occlusion Synthesis (TAOS), a pipeline that creates diverse synthetic training data.
  • It also adds new learning objectives to enforce regional consistency and topological regularization, improving the quality and coherence of predicted shapes.
  • Experiments report state-of-the-art results on standard benchmarks and demonstrate robust generalization to novel object categories and unseen contexts.
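The summary above does not detail how TAOS works internally; the general idea behind synthetic occlusion generation, however, can be sketched: take a fully visible object whose complete mask is known, composite a synthetic occluder on top of it, keep the original mask as the amodal ground truth, and derive the visible (modal) mask by subtracting the occluder. A minimal pure-Python sketch of that idea, where all names (e.g. `synthesize_occlusion`) are illustrative and not the paper's API, and masks are represented as sets of pixel coordinates:

```python
def synthesize_occlusion(amodal_mask, occluder_mask):
    """Derive the visible (modal) mask from a fully visible object's mask
    (the amodal ground truth) and a synthetic occluder mask.
    Masks are sets of (row, col) pixel coordinates."""
    return amodal_mask - occluder_mask

def occlusion_ratio(amodal_mask, occluder_mask):
    """Fraction of the target that the occluder hides; a target-aware
    pipeline would place occluders so this stays in a useful range."""
    return len(amodal_mask & occluder_mask) / len(amodal_mask)

# Toy example: a 3x3 square object with an occluder covering its right column.
amodal = {(r, c) for r in range(3) for c in range(3)}
occluder = {(r, 2) for r in range(3)}

modal = synthesize_occlusion(amodal, occluder)
# Training pairs are then (image with occluder, modal) -> amodal.
```

Real pipelines operate on image arrays rather than coordinate sets, but the supervision structure is the same: the modal mask is the input-side annotation and the pre-occlusion mask is the target.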

Abstract

Amodal segmentation is a challenging task that aims to predict the complete geometric shape of objects, including their occluded regions. Although existing methods primarily focus on amodal segmentation within the training domain, these approaches often lack the generalization capacity to extend effectively to novel object categories and unseen contexts. This paper introduces Amodal SAM, a unified framework that leverages SAM (Segment Anything Model) for both amodal image and amodal video segmentation. Amodal SAM preserves the powerful generalization ability of SAM while extending its inherent capabilities to the amodal segmentation task. The improvements lie in three aspects: (1) a lightweight Spatial Completion Adapter that enables occluded region reconstruction, (2) a Target-Aware Occlusion Synthesis (TAOS) pipeline that addresses the scarcity of amodal annotations by generating diverse synthetic training data, and (3) novel learning objectives that enforce regional consistency and topological regularization. Extensive experiments demonstrate that Amodal SAM achieves state-of-the-art performance on standard benchmarks, while simultaneously exhibiting robust generalization to novel scenarios. We anticipate that this research will advance the field toward practical amodal segmentation systems capable of operating effectively in unconstrained real-world environments.
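The abstract names a "regional consistency" objective without specifying it. One natural reading is a containment constraint: a predicted amodal mask must cover every visible pixel, since the complete shape always includes the visible region. A hedged sketch of such a penalty, in pure Python with illustrative names (this is an assumption about the objective, not the paper's actual loss):

```python
def regional_consistency_penalty(pred_amodal, visible_modal):
    """Fraction of visible pixels that the predicted amodal mask fails
    to cover. Zero when the prediction fully contains the visible region,
    as any physically consistent amodal mask must.
    Masks are sets of (row, col) pixel coordinates."""
    if not visible_modal:
        return 0.0
    uncovered = visible_modal - pred_amodal
    return len(uncovered) / len(visible_modal)

# A prediction missing 1 of 4 visible pixels incurs a 0.25 penalty.
visible = {(0, 0), (0, 1), (1, 0), (1, 1)}
pred = {(0, 0), (0, 1), (1, 0), (2, 2)}
penalty = regional_consistency_penalty(pred, visible)  # 0.25
```

In a differentiable training setup this hard set difference would be replaced by a soft version over predicted probabilities, but the invariant being enforced is the same.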