UHR-DETR: Efficient End-to-End Small Object Detection for Ultra-High-Resolution Remote Sensing Imagery

arXiv cs.CV / 4/24/2026


Key Points

  • The paper introduces UHR-DETR, an efficient end-to-end transformer-based detector tailored for ultra-high-resolution (UHR) remote sensing imagery where small-object detection is constrained by memory and context loss.
  • It proposes a Coverage-Maximizing Sparse Encoder that allocates limited compute to the most informative high-resolution regions to maximize object coverage while reducing redundant spatial processing.
  • It also presents a Global-Local Decoupled Decoder that combines global scene understanding with local object details to resolve semantic ambiguity and avoid scene fragmentation.
  • Experiments on datasets such as STAR and SODA-A show UHR-DETR outperforms prior approaches under strict hardware limits (e.g., a single 24GB RTX 3090), delivering +2.8% mAP and up to 10× faster inference than sliding-window baselines on STAR.
  • The authors indicate that the code and models will be released on GitHub.

Abstract

Ultra-High-Resolution (UHR) imagery has become essential for modern remote sensing, offering unprecedented spatial coverage. However, detecting small objects in such vast scenes presents a critical dilemma: retaining the original resolution preserves small objects but causes prohibitive memory bottlenecks, while conventional compromises such as image downsampling or patch cropping either erase small objects or destroy context. To break this dilemma, we propose UHR-DETR, an efficient end-to-end transformer-based detector designed for UHR imagery. First, we introduce a Coverage-Maximizing Sparse Encoder that dynamically allocates finite computational resources to informative high-resolution regions, ensuring maximum object coverage with minimal spatial redundancy. Second, we design a Global-Local Decoupled Decoder. By integrating macroscopic scene awareness with microscopic object details, this module resolves semantic ambiguities and prevents scene fragmentation. Extensive experiments on UHR imagery datasets (e.g., STAR and SODA-A) demonstrate the superiority of UHR-DETR under strict hardware constraints (e.g., a single 24GB RTX 3090). It achieves a 2.8% mAP improvement while delivering a 10× inference speedup over standard sliding-window baselines on the STAR dataset. Our code and models will be available at https://github.com/Li-JingFang/UHR-DETR.
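To give a feel for the sparse-encoding idea, here is a minimal, hypothetical sketch of budgeted tile selection: score non-overlapping high-resolution tiles with a cheap objectness proxy and keep only the top-scoring ones for full processing. This is an illustration, not the paper's actual Coverage-Maximizing Sparse Encoder, whose region scoring is learned; the tile size, budget, and mean-activation score below are assumptions made for the example.

```python
import numpy as np

def select_informative_tiles(feature_map, tile=64, budget=8):
    """Toy stand-in for coverage-maximizing sparse encoding:
    score each non-overlapping tile of a 2D activation map and
    return the (row, col) indices of the top-`budget` tiles.
    The real method uses a learned, coverage-aware scorer."""
    H, W = feature_map.shape
    rows, cols = H // tile, W // tile
    scored = []
    for r in range(rows):
        for c in range(cols):
            patch = feature_map[r * tile:(r + 1) * tile,
                                c * tile:(c + 1) * tile]
            # mean activation as a crude objectness proxy (assumption)
            scored.append(((r, c), patch.mean()))
    scored.sort(key=lambda x: x[1], reverse=True)
    return [rc for rc, _ in scored[:budget]]

# A 512x512 map of low background activation with one bright
# region standing in for a small object in a UHR scene.
rng = np.random.default_rng(0)
fmap = rng.random((512, 512)) * 0.1
fmap[64:128, 320:384] += 5.0  # "object" fills tile (1, 5)
print(select_informative_tiles(fmap, tile=64, budget=3)[0])  # → (1, 5)
```

Only the selected tiles would then be run through the expensive encoder, which is how this style of sparsity trades redundant background computation for coverage of object-bearing regions under a fixed memory budget.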