GLEaN: A Text-to-image Bias Detection Approach for Public Comprehension
arXiv cs.AI / 4/14/2026
Key Points
- GLEaN is a portrait-based, model-agnostic text-to-image bias explainability pipeline aimed at making T2I biases understandable to non-technical audiences.
- The method generates images from identity prompts, filters and aligns them using facial landmarks, and then creates a median-pixel composite that visually summarizes the model’s central tendency.
- Applied to Stable Diffusion XL across 40 identity prompts, GLEaN reproduces known biases and also surfaces new associations, such as links between skin tone and predicted emotion.
- In a user study (N=291), GLEaN portraits communicated bias findings as effectively as conventional tables while significantly reducing the time required to view and interpret results.
- Because it uses only generated outputs, GLEaN can be replicated on black-box systems without access to internal model details, and the code is released on GitHub.
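The alignment-and-composite step described above can be sketched in a few lines. This is a minimal illustration, not the paper's released code: it assumes facial landmarks have already been extracted for each generated portrait, fits a similarity transform (rotation, scale, translation) to map each face onto a reference, and then takes a per-pixel median over the aligned stack. The function names `fit_similarity` and `median_composite` are hypothetical.

```python
import numpy as np

def fit_similarity(src, dst):
    """Fit a 2D similarity transform x' = s*R*x + t from paired landmarks.

    Parametrized as [[a, -b], [b, a]] plus translation, solved by least
    squares. src/dst are sequences of (x, y) landmark coordinates.
    """
    A, B = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, -y, 1, 0]); B.append(u)
        A.append([y,  x, 0, 1]); B.append(v)
    a, b, tx, ty = np.linalg.lstsq(
        np.asarray(A, float), np.asarray(B, float), rcond=None)[0]
    return np.array([[a, -b], [b, a]]), np.array([tx, ty])

def median_composite(aligned_images):
    """Per-pixel median over a stack of aligned HxWxC uint8 images.

    The median suppresses outlier pixels, leaving a portrait that
    summarizes the model's visual central tendency for a prompt.
    """
    stack = np.stack(aligned_images).astype(np.float32)
    return np.median(stack, axis=0).astype(np.uint8)
```

In practice each generated image would be warped with the fitted transform (e.g., via an image-warping routine) before compositing; the sketch applies the transform idea only to landmark coordinates and shows the compositing step directly.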