SPAGBias: Uncovering and Tracing Structured Spatial Gender Bias in Large Language Models
arXiv cs.CL · April 17, 2026
Key Points
- The paper introduces SPAGBias, a systematic framework for evaluating spatial gender bias in large language models used in contexts like urban planning.
- SPAGBias combines a taxonomy of 62 urban micro-spaces, a prompt library, and three diagnostic layers (explicit forced-choice resampling, probabilistic token-level asymmetry, and constructional semantic/narrative role analysis).
- Experiments on six representative models find structured, micro-level gender–space associations that extend beyond the common public–private divide and influence how “spatial gender narratives” are generated.
- The study shows that prompt design, sampling temperature, and model scale all affect how the bias is expressed. Tracing experiments suggest the patterns are reinforced at each stage of the training pipeline (pre-training, instruction tuning, and reward modeling) and are stronger than the corresponding real-world distributions.
- Downstream evaluations indicate these biases can cause concrete failures in both normative and descriptive application settings, linking sociological theory of gendered space with computational bias measurement.
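To make the second diagnostic layer concrete, here is a minimal, hypothetical sketch of a token-level asymmetry probe in the spirit the paper describes. The micro-space names, probability values, and the log-odds scoring rule are all illustrative assumptions, not taken from SPAGBias itself; in a real evaluation the probabilities would come from a model's next-token distribution over gendered continuations of a spatial prompt.

```python
import math

def asymmetry_score(p_male: float, p_female: float) -> float:
    """Log-odds asymmetry between gendered continuations.

    > 0 leans male, < 0 leans female, ~0 is neutral.
    (Illustrative metric; the paper's exact formulation may differ.)
    """
    return math.log(p_male) - math.log(p_female)

# Toy next-token probabilities for "he"/"she" after a prompt such as
# "The person waiting at the {space} was ..." -- numbers are made up.
token_probs = {
    "hardware store": {"he": 0.42, "she": 0.11},
    "nail salon":     {"he": 0.06, "she": 0.48},
    "bus stop":       {"he": 0.21, "she": 0.19},
}

scores = {
    space: asymmetry_score(p["he"], p["she"])
    for space, p in token_probs.items()
}

# Rank micro-spaces from most male-leaning to most female-leaning.
for space, s in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{space:15s} {s:+.2f}")
```

A probe like this scales naturally to the paper's taxonomy of micro-spaces: running it over many prompts and aggregating the scores would surface exactly the kind of structured gender–space associations the authors report.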

