ViCrop-Det: Spatial Attention Entropy Guided Cropping for Training-Free Small-Object Detection

arXiv cs.CV, 30 Apr 2026


Key Points

  • ViCrop-Det is a training-free inference framework for small-object detection that targets feature degradation caused by Transformers’ uniform global receptive fields in spatially heterogeneous images.
  • It uses Spatial Attention Entropy (SAE) derived from the detection decoder’s cross-attention to estimate local spatial ambiguity, enabling adaptive “spatial trust region” shrinkage and dynamic routing.
  • The method reallocates a fixed computational budget only to regions that show both high target saliency and high uncertainty, injecting localized high-frequency observations to recover fine-grained features.
  • Experiments on VisDrone and DOTA-v1.5 show consistent gains of about +1–3 mAP@50 when applied to RT-DETR-R50 and Deformable DETR, with only ~20–23% latency overhead.
  • On MS COCO, it improves small-object performance (AP_S) while keeping medium/large performance stable, and under compute-matched comparisons it outperforms uniform cropping/slicing baselines on the accuracy–speed trade-off.
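The core signal described above can be sketched in a few lines: aggregate the decoder's cross-attention into a saliency map, then score each grid cell by the product of its attention mass and its normalized local entropy, so that only cells with both high target saliency and high ambiguity rank high. This is a minimal illustration, not the paper's implementation; the attention tensor shape `(Q, H, W)` and the uniform grid are assumptions.

```python
import numpy as np

def spatial_attention_entropy(attn, grid=4, eps=1e-12):
    """Score image regions from decoder cross-attention maps.

    attn: (Q, H, W) array of cross-attention weights, one spatial map per
          query (hypothetical layout; real DETR decoders expose per-layer,
          per-head maps that would first be averaged).
    Returns a (grid, grid) score map: saliency * entropy per cell.
    """
    Q, H, W = attn.shape
    sal = attn.mean(axis=0)                      # aggregate saliency map
    sal = sal / (sal.sum() + eps)                # global distribution
    gh, gw = H // grid, W // grid
    scores = np.zeros((grid, grid))
    for i in range(grid):
        for j in range(grid):
            cell = sal[i*gh:(i+1)*gh, j*gw:(j+1)*gw].ravel()
            mass = cell.sum()                    # target saliency of the cell
            p = cell / (mass + eps)              # local spatial distribution
            ent = -(p * np.log(p + eps)).sum()   # Shannon entropy = ambiguity
            ent /= np.log(len(cell))             # normalize to [0, 1]
            scores[i, j] = mass * ent            # high-saliency AND high-entropy
    return scores
```

A cell holding one sharp attention spike (confident localization) scores near zero despite its mass, while a cell with the same mass spread diffusely (ambiguous localization) scores highest, which is the routing behavior the key points describe.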

Abstract

Transformer-based architectures have established a dominant paradigm in global semantic perception; however, they remain fundamentally constrained by the profound spatial heterogeneity inherent in natural images. Specifically, the imposition of a uniform global receptive field across regions of varying information density inevitably leads to local feature degradation, particularly in dense conflict zones populated by microscopic targets. To address this mechanistic limitation, we propose ViCrop-Det, a training-free inference framework that introduces adaptive spatial trust region shrinkage. Inspired by the use of attention entropy in anomaly segmentation, ViCrop-Det leverages the detection decoder's cross-attention distribution as an endogenous probe. By utilizing Spatial Attention Entropy (SAE) to heuristically evaluate local spatial ambiguity, the framework executes dynamic spatial routing, allocating a fixed computational budget exclusively to regions exhibiting both high target saliency and high cognitive uncertainty. By shrinking the spatial trust region and injecting high-frequency localized observations, ViCrop-Det actively resolves spatial ambiguity and recovers fine-grained features without requiring architectural modifications. Extensive evaluations on VisDrone and DOTA-v1.5 demonstrate that ViCrop-Det yields competitive performance enhancements, consistently adding +1-3 mAP@50 to RT-DETR-R50 and Deformable DETR with a marginal 20-23% latency overhead. On MS COCO, AP_S improves while AP_M/AP_L remain stable, indicating precise fine-scale refinement without compromising the global spatial prior. Under compute-matched settings, our adaptive routing strategy comprehensively surpasses uniform slicing baselines, achieving a highly optimized accuracy-speed trade-off.
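The budgeted routing step the abstract describes (a fixed compute budget spent only on the highest-scoring regions, whose detections are then merged back) can be sketched as follows. Everything here is illustrative: `detect_fn` is a placeholder for the frozen detector, the grid-cell cropping is a simplification, and a real pipeline would deduplicate the merged boxes with NMS.

```python
import numpy as np

def route_and_detect(image, scores, detect_fn, budget=2):
    """Budgeted spatial routing over an SAE-style score map.

    image:     (H, W, C) array.
    scores:    (g, g) per-cell score map (e.g. saliency * entropy).
    detect_fn: placeholder detector, img -> (N, 5) [x1, y1, x2, y2, conf].
    budget:    number of extra crops to re-inspect (fixed compute budget).
    """
    H, W = image.shape[:2]
    g = scores.shape[0]
    ch, cw = H // g, W // g
    # rank cells by score, keep only the top-`budget`
    order = np.argsort(scores.ravel())[::-1][:budget]
    merged = [detect_fn(image)]                  # global low-resolution pass
    for idx in order:
        i, j = divmod(int(idx), g)
        crop = image[i*ch:(i+1)*ch, j*cw:(j+1)*cw]
        boxes = detect_fn(crop)                  # localized high-frequency pass
        if len(boxes):
            boxes = boxes.copy()
            boxes[:, [0, 2]] += j * cw           # shift x back to full image
            boxes[:, [1, 3]] += i * ch           # shift y back to full image
            merged.append(boxes)
    return np.concatenate(merged, axis=0)
```

Because the crop passes reuse the unmodified detector, this matches the training-free claim: no architectural change, only extra forward passes on the few cells the entropy map flags.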
