Who Defines Fairness? Target-Based Prompting for Demographic Representation in Generative Models
arXiv cs.AI / 4/25/2026
Key Points
- Text-to-image models can reproduce demographic and professional stereotypes, such as producing lighter skin tones for roles like “doctor” or “CEO” and more diverse (often darker) depictions for lower-status roles like “janitor.”
- Existing bias-mitigation approaches often require retraining or curated datasets, limiting accessibility for most users.
- The paper proposes a lightweight, inference-time prompting framework that intervenes at the prompt level without changing the underlying generative model.
- Rather than enforcing a single notion of “fairness,” the method lets users choose among multiple fairness specifications, including uniform targets or more complex LLM-based definitions with cited sources and confidence estimates.
- Experiments with 36 prompts covering 30 occupations and 6 other contexts show that skin-tone distributions shift toward the declared target, with smaller deviations when fairness targets are specified directly in skin-tone space.
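Functionally, a prompt-level intervention like this can be sketched as sampling a demographic descriptor from the user's chosen target distribution and injecting it into each generation request, with no change to the model itself. The descriptor vocabulary, prompt template, and function names below are illustrative assumptions, not the paper's actual implementation:

```python
import random

def targeted_prompt(base_prompt: str, target_dist: dict, rng=random) -> str:
    """Sample one descriptor according to the user-specified target
    distribution and splice it into the prompt.

    `target_dist` maps descriptor strings to probabilities; the
    vocabulary here is illustrative, not the paper's.
    """
    descriptors, weights = zip(*target_dist.items())
    chosen = rng.choices(descriptors, weights=weights, k=1)[0]
    return f"a photo of a {chosen} {base_prompt}"

# A uniform fairness target over a hypothetical skin-tone vocabulary.
uniform_target = {
    "light-skinned": 1 / 3,
    "medium-skinned": 1 / 3,
    "dark-skinned": 1 / 3,
}

# Over many generations, descriptor frequencies track the target.
prompts = [targeted_prompt("doctor", uniform_target) for _ in range(6)]
```

Swapping in a non-uniform `target_dist` (for example, one derived from an LLM-proposed demographic estimate) changes only the sampling weights, which is what makes the approach model-agnostic.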