Amodal SAM: A Unified Amodal Segmentation Framework with Generalization
arXiv cs.CV / 4/23/2026
Key Points
- The paper proposes “Amodal SAM,” a unified framework that adapts Meta’s Segment Anything Model (SAM) to amodal segmentation of both images and videos, predicting complete object shapes including their occluded regions.
- It preserves SAM’s strong generalization while extending it to amodal segmentation through a Spatial Completion Adapter that reconstructs the hidden parts of objects (a minimal sketch of the idea follows this list).
- To address the scarcity of amodal annotations, it introduces Target-Aware Occlusion Synthesis (TAOS), a pipeline that generates diverse synthetic occlusion training data (see the second sketch below).
- It also adds learning objectives that enforce regional consistency and topological regularization, improving the coherence of predicted shapes (illustrative stand-in losses are sketched below).
- Experiments report state-of-the-art results on standard benchmarks and demonstrate robust generalization to novel object categories and unseen contexts.
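
The paper's actual Spatial Completion Adapter is not reproduced in this digest, so the following is only a minimal sketch of the general idea: a lightweight trainable head on top of frozen SAM features that completes the occluded portion of a mask. The class name, `embed_dim`, and the forward signature are all assumptions for illustration.

```python
# Hedged sketch: a toy adapter that fuses SAM image embeddings with the ordinary
# (visible-only) mask logits and predicts amodal-mask logits covering occluded regions.
import torch
import torch.nn as nn


class SpatialCompletionAdapter(nn.Module):
    """Illustrative adapter: completes the hidden part of a mask from frozen SAM features."""

    def __init__(self, embed_dim: int = 256, hidden_dim: int = 64):
        super().__init__()
        # Project the frozen SAM image embedding down to a small working width.
        self.feat_proj = nn.Conv2d(embed_dim, hidden_dim, kernel_size=1)
        # A few convolutions mix feature context with the visible-mask prior,
        # which is what lets the head extend the shape into occluded pixels.
        self.completion = nn.Sequential(
            nn.Conv2d(hidden_dim + 1, hidden_dim, kernel_size=3, padding=1),
            nn.GELU(),
            nn.Conv2d(hidden_dim, hidden_dim, kernel_size=3, padding=1),
            nn.GELU(),
            nn.Conv2d(hidden_dim, 1, kernel_size=1),  # amodal-mask logits
        )

    def forward(self, image_embed: torch.Tensor, visible_logits: torch.Tensor) -> torch.Tensor:
        # image_embed:    (B, embed_dim, H, W) frozen SAM encoder features
        # visible_logits: (B, 1, H, W) SAM's ordinary (modal) mask prediction
        feats = self.feat_proj(image_embed)
        fused = torch.cat([feats, visible_logits], dim=1)
        return self.completion(fused)


if __name__ == "__main__":
    adapter = SpatialCompletionAdapter()
    img_embed = torch.randn(2, 256, 64, 64)   # placeholder SAM embedding
    visible = torch.randn(2, 1, 64, 64)       # placeholder modal-mask logits
    print(adapter(img_embed, visible).shape)  # torch.Size([2, 1, 64, 64])
```

The adapter design is what lets the base SAM weights stay frozen, which is one common way to keep the original model's generalization while adding a new capability.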
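The TAOS pipeline itself is not described in detail here, so the sketch below only shows the core trick behind most synthetic-occlusion pipelines: paste an occluder over an unoccluded object so the original full mask becomes a free amodal label and the uncovered pixels define the visible mask. The function and argument names are illustrative assumptions.

```python
# Hedged sketch of synthetic occlusion generation (not the paper's exact pipeline).
import numpy as np


def synthesize_occlusion(image: np.ndarray,
                         full_mask: np.ndarray,
                         occluder_rgba: np.ndarray,
                         top: int, left: int):
    """Composite `occluder_rgba` (H', W', 4) onto `image` (H, W, 3) at (top, left).

    Returns (occluded_image, visible_mask, amodal_mask) where
      amodal_mask  = the original full object mask (free ground truth),
      visible_mask = full_mask minus the pixels now covered by the occluder.
    """
    img = image.copy()
    h, w = img.shape[:2]
    oh, ow = occluder_rgba.shape[:2]
    bottom, right = min(top + oh, h), min(left + ow, w)

    patch = occluder_rgba[: bottom - top, : right - left]
    alpha = patch[..., 3:4].astype(np.float32) / 255.0

    # Alpha-blend the occluder into the image.
    region = img[top:bottom, left:right].astype(np.float32)
    img[top:bottom, left:right] = (alpha * patch[..., :3] + (1 - alpha) * region).astype(img.dtype)

    # Visible mask = full mask with the newly occluded pixels removed.
    occluder_mask = np.zeros((h, w), dtype=bool)
    occluder_mask[top:bottom, left:right] = alpha[..., 0] > 0.5
    visible_mask = full_mask & ~occluder_mask
    amodal_mask = full_mask.copy()

    return img, visible_mask, amodal_mask
```

The "target-aware" part of TAOS presumably governs how occluders are chosen and placed relative to the target object; in this sketch that decision is left to the caller via `top` and `left`.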
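The digest does not spell out the exact regional-consistency and topological losses, so the two terms below are generic stand-ins that capture the stated intent: the first pushes the amodal prediction to agree with the visible mask wherever the object is actually seen, and the second is a total-variation proxy for a coherent, non-fragmented shape rather than a true persistence-based topological loss. All names are assumptions.

```python
# Hedged stand-in objectives, not the paper's actual losses.
import torch
import torch.nn.functional as F


def regional_consistency_loss(amodal_logits: torch.Tensor,
                              visible_mask: torch.Tensor) -> torch.Tensor:
    """Binary cross-entropy restricted to visible pixels: the amodal mask must be
    'on' wherever the object is visibly present."""
    bce = F.binary_cross_entropy_with_logits(amodal_logits, visible_mask, reduction="none")
    # Only count visibly-object pixels; occluded pixels are unconstrained here.
    return (bce * visible_mask).sum() / visible_mask.sum().clamp(min=1.0)


def smoothness_regularizer(amodal_logits: torch.Tensor) -> torch.Tensor:
    """Total-variation penalty on the predicted probability map, discouraging
    speckled or fragmented amodal shapes."""
    prob = torch.sigmoid(amodal_logits)
    dh = (prob[..., 1:, :] - prob[..., :-1, :]).abs().mean()
    dw = (prob[..., :, 1:] - prob[..., :, :-1]).abs().mean()
    return dh + dw


if __name__ == "__main__":
    logits = torch.randn(2, 1, 64, 64)
    visible = (torch.rand(2, 1, 64, 64) > 0.7).float()
    total = regional_consistency_loss(logits, visible) + 0.1 * smoothness_regularizer(logits)
    print(float(total))
```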