Looking Beyond the Window: Global-Local Aligned CLIP for Training-free Open-Vocabulary Semantic Segmentation

arXiv cs.CV / 3/25/2026


Key Points

  • The paper identifies a limitation in training-free open-vocabulary semantic segmentation methods that use sliding-window inference: independent window processing causes semantic discrepancies across windows.
  • It proposes Global-Local Aligned CLIP (GLA-CLIP), which extends CLIP key-value tokens to enable information exchange across all windows instead of restricting attention to local window tokens.
  • The authors address a “window bias” problem where outer-window tokens receive less attention by introducing a proxy anchor that aggregates highly query-relevant tokens from all windows as a unified semantic reference.
  • To improve robustness for small objects, GLA-CLIP adds a dynamic normalization scheme that scales and thresholds attention based on object scale.
  • The method is reported to serve as a plug-in enhancement for existing approaches, broadening their receptive fields; the claims are supported by extensive experiments, and code is released.
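The core idea in the second bullet, attending over key-value tokens pooled from all windows rather than only the local window, can be illustrated with a minimal numpy sketch. This is not the paper's implementation; the function names, window sizes, and the plain single-head dot-product attention are assumptions made for clarity.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def local_window_attention(q, kv):
    # Baseline sliding-window behavior: queries attend only to tokens
    # in their own window, so each window is processed independently.
    scores = q @ kv.T / np.sqrt(q.shape[-1])
    return softmax(scores) @ kv

def global_local_attention(q, kv_windows):
    # GLA-style extension (sketch): the key-value set is the concatenation
    # of tokens from ALL windows, so each query can pick up contextual
    # cues from outside its own window.
    kv_all = np.concatenate(kv_windows, axis=0)
    scores = q @ kv_all.T / np.sqrt(q.shape[-1])
    return softmax(scores) @ kv_all

rng = np.random.default_rng(0)
windows = [rng.standard_normal((4, 8)) for _ in range(3)]  # 3 windows, 4 tokens each
q = windows[1]  # queries from the middle window
out_local = local_window_attention(q, windows[1])
out_global = global_local_attention(q, windows)
print(out_local.shape, out_global.shape)  # both (4, 8)
```

Both paths return one output vector per query; the difference is only the pool of key-value tokens each query can attend to.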

Abstract

A sliding-window inference strategy is commonly adopted in recent training-free open-vocabulary semantic segmentation methods to overcome a limitation of CLIP in processing high-resolution images. However, this approach introduces a new challenge: each window is processed independently, leading to semantic discrepancies across windows. To address this issue, we propose Global-Local Aligned CLIP (GLA-CLIP), a framework that facilitates comprehensive information exchange across windows. Rather than limiting attention to tokens within individual windows, GLA-CLIP extends the key-value tokens to incorporate contextual cues from all windows. Nevertheless, we observe a window bias: outer-window tokens are less likely to be attended to, since query features are produced through interactions among inner-window patches and therefore lack semantic grounding beyond their local context. To mitigate this, we introduce a proxy anchor, constructed by aggregating tokens highly similar to a given query from all windows, which provides a unified semantic reference for measuring similarity across both inner- and outer-window patches. Furthermore, we propose a dynamic normalization scheme that adjusts attention strength according to object scale, dynamically scaling and thresholding the attention map to cope with small-object scenarios. Moreover, GLA-CLIP can be integrated into existing methods to broaden their receptive fields. Extensive experiments validate the effectiveness of GLA-CLIP in enhancing training-free open-vocabulary semantic segmentation performance. Code is available at https://github.com/2btlFe/GLA-CLIP.
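The abstract's two remedies, the proxy anchor and the dynamic normalization, can be sketched as follows. Everything here is illustrative: `proxy_anchor`, `top_k`, `dynamic_normalize`, and the specific scale/threshold arithmetic are hypothetical stand-ins, not the paper's exact recipe.

```python
import numpy as np

def proxy_anchor(query, all_window_tokens, top_k=8):
    # Illustrative proxy anchor: average the top_k tokens (pooled from
    # every window) most cosine-similar to the query. The anchor then
    # serves as a unified reference for scoring inner- and outer-window
    # patches, mitigating the bias toward inner-window tokens.
    qn = query / np.linalg.norm(query)
    tn = all_window_tokens / np.linalg.norm(all_window_tokens, axis=1, keepdims=True)
    sims = tn @ qn
    idx = np.argsort(sims)[-top_k:]
    return all_window_tokens[idx].mean(axis=0)

def dynamic_normalize(attn, scale, threshold=0.0):
    # Illustrative dynamic normalization: amplify the attention map by a
    # scale-dependent factor, zero out weak responses, then renormalize,
    # so small objects are not washed out by diffuse background attention.
    a = attn * scale
    a = np.where(a < threshold, 0.0, a)
    return a / (a.sum() + 1e-8)

rng = np.random.default_rng(1)
tokens = rng.standard_normal((24, 8))  # tokens pooled from all windows
anchor = proxy_anchor(tokens[0], tokens, top_k=4)
attn = dynamic_normalize(np.abs(rng.standard_normal(16)), scale=2.0, threshold=0.5)
print(anchor.shape, attn.sum())
```

In this sketch a larger `scale` (e.g., for smaller objects) sharpens the map before thresholding, while the final division keeps the surviving responses summing to one.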