AI Navigate

Towards Instance Segmentation with Polygon Detection Transformers

arXiv cs.CV / March 11, 2026

Ideas & Deep Analysis · Models & Research

Key Points

  • The proposed Polygon Detection Transformer (Poly-DETR) addresses the challenge of instance segmentation by transforming it into a sparse vertex regression problem using Polar Representation, avoiding dense pixel-wise mask predictions.
  • Poly-DETR introduces Polar Deformable Attention and a Position-Aware Training Scheme to better focus on boundary cues and handle the box-to-polygon reference shifts inherent in Detection Transformers.
  • The method achieves a 4.7 mAP improvement over state-of-the-art polar-based methods on the MS COCO benchmark and significantly reduces memory consumption, particularly in high-resolution scenarios such as the Cityscapes dataset.
  • Poly-DETR outperforms mask-based counterparts across all metrics on domain-specific datasets like PanNuke for cell segmentation and SpaceNet for building footprints, demonstrating its effectiveness for regular-shaped instance segmentation.
  • The study also includes a systematic comparison between polar and mask-based representations, highlighting advantages of the polygon detection approach for lightweight and real-time inference in instance segmentation tasks.
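The core idea above — describing an instance contour as a sparse set of radial distances from a center rather than a dense pixel mask — can be illustrated with a minimal decoding sketch. This is a generic polar-contour decoder in the spirit of polar-based methods; the function name, the uniform angular sampling, and the fixed center are assumptions for illustration, not the paper's actual implementation.

```python
import math

def decode_polar_polygon(center, radii):
    """Decode a polar contour into Cartesian polygon vertices.

    The instance is described by a center point and N radial
    distances sampled at uniformly spaced angles; each
    (angle, radius) pair maps to one vertex on the contour.
    """
    cx, cy = center
    n = len(radii)
    vertices = []
    for k, r in enumerate(radii):
        theta = 2.0 * math.pi * k / n  # uniform angular sampling
        vertices.append((cx + r * math.cos(theta),
                         cy + r * math.sin(theta)))
    return vertices

# A square-ish instance: 4 equal-length rays from center (10, 10)
poly = decode_polar_polygon((10.0, 10.0), [5.0, 5.0, 5.0, 5.0])
```

With N radii per instance, the mask cost scales with the number of vertices instead of the pixel resolution, which is the source of the memory savings the paper reports on high-resolution inputs.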


arXiv:2603.09245 (cs)
[Submitted on 10 Mar 2026]

Title: Towards Instance Segmentation with Polygon Detection Transformers

By Jiacheng Sun and 7 other authors
Abstract: One of the bottlenecks for instance segmentation today lies in the conflicting requirements of high-resolution inputs and lightweight, real-time inference. To address this bottleneck, we present a Polygon Detection Transformer (Poly-DETR) to reformulate instance segmentation as sparse vertex regression via Polar Representation, thereby eliminating the reliance on dense pixel-wise mask prediction. Considering the box-to-polygon reference shift in Detection Transformers, we propose Polar Deformable Attention and a Position-Aware Training Scheme to dynamically update supervision and focus attention on boundary cues. Compared with state-of-the-art polar-based methods, Poly-DETR achieves a 4.7 mAP improvement on MS COCO test-dev. Moreover, we construct a parallel mask-based counterpart to support a systematic comparison between polar and mask representations. Experimental results show that Poly-DETR is more lightweight in high-resolution scenarios, reducing memory consumption by almost half on the Cityscapes dataset. Notably, on the PanNuke (cell segmentation) and SpaceNet (building footprints) datasets, Poly-DETR surpasses its mask-based counterpart on all metrics, which validates its advantage on regular-shaped instances in domain-specific settings.
Subjects: Computer Vision and Pattern Recognition (cs.CV)
Cite as: arXiv:2603.09245 [cs.CV]
  (or arXiv:2603.09245v1 [cs.CV] for this version)
  https://doi.org/10.48550/arXiv.2603.09245

Submission history

From: Jiacheng Sun
[v1] Tue, 10 Mar 2026 06:18:33 UTC (8,855 KB)