Selective Aggregation of Attention Maps Improves Diffusion-Based Visual Interpretation

arXiv cs.CV / 4/8/2026


Key Points

  • The paper studies how cross-attention maps from different heads behave in text-to-image (T2I) diffusion models, noting that head-wise differences have been less explored for interpretability.
  • It proposes selective aggregation of cross-attention maps by choosing heads most relevant to a target concept, rather than aggregating uniformly.
  • Compared with DAAM, the proposed approach achieves higher mean IoU scores, improving diffusion-based visual interpretation.
  • The authors find that relevant heads better capture concept-specific features than less relevant heads, and that selective aggregation can help diagnose prompt misinterpretations.
  • Overall, the work suggests attention head selection is a promising method to improve both interpretability and controllability of T2I generation.
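
The selective-aggregation step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the relevance score per head and the top-k selection rule are assumptions made here for clarity, since the summary does not specify the exact ranking criterion.

```python
import numpy as np

def selective_aggregate(head_maps, scores, k=4):
    """Aggregate only the k most relevant heads' cross-attention maps.

    head_maps: (num_heads, H, W) cross-attention maps for one concept token.
    scores:    (num_heads,) hypothetical per-head relevance score (assumption).
    k:         number of heads to keep (assumption; the paper may select differently).
    """
    top = np.argsort(scores)[-k:]       # indices of the k highest-scoring heads
    agg = head_maps[top].mean(axis=0)   # average over the selected heads only,
                                        # instead of uniformly over all heads
    return agg / (agg.max() + 1e-8)     # normalize to [0, 1] for visualization

# toy example: 8 heads producing 16x16 attention maps
rng = np.random.default_rng(0)
maps = rng.random((8, 16, 16))
scores = rng.random(8)
heatmap = selective_aggregate(maps, scores, k=4)
```

The contrast with uniform aggregation is the `head_maps[top]` indexing: a DAAM-style baseline would average all heads, whereas here low-relevance heads are excluded before averaging.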

Abstract

Numerous studies on text-to-image (T2I) generative models have utilized cross-attention maps to boost application performance and interpret model behavior. However, the distinct characteristics of attention maps from different attention heads remain relatively underexplored. In this study, we show that selectively aggregating cross-attention maps from heads most relevant to a target concept can improve visual interpretability. Compared to the diffusion-based segmentation method DAAM, our approach achieves higher mean IoU scores. We also find that the most relevant heads capture concept-specific features more accurately than the least relevant ones, and that selective aggregation helps diagnose prompt misinterpretations. These findings suggest that attention head selection offers a promising direction for improving the interpretability and controllability of T2I generation.
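The mean-IoU comparison against DAAM implies a per-image IoU between a binarized attention heatmap and a ground-truth segmentation mask. A minimal sketch of that metric is below; the 0.5 binarization threshold is an assumption for illustration, not a value reported in the paper.

```python
import numpy as np

def iou(heatmap, mask, thresh=0.5):
    """IoU between a thresholded heatmap and a binary ground-truth mask.

    heatmap: (H, W) aggregated attention map, normalized to [0, 1].
    mask:    (H, W) binary ground-truth segmentation mask.
    thresh:  binarization threshold (assumed value for illustration).
    """
    pred = heatmap >= thresh
    gt = mask.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 0.0

# toy check: a heatmap that exactly matches the mask scores IoU = 1.0
m = np.zeros((4, 4))
m[:2] = 1.0
print(iou(m, m))  # -> 1.0
```

Mean IoU would then be the average of this score over all evaluated image-concept pairs.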