Large language models perceive cities through a culturally uneven baseline
arXiv cs.CL · April 23, 2026
Key Points
- The study tests whether frontier large language models describe cities in a culturally neutral way, using a balanced global street-view sample and comparing culturally neutral prompts with regionally specified ones.
- Results show the “neutral” prompting condition is not actually neutral: outputs tied to Europe and North America stay systematically closer to an underlying baseline than many non-Western prompts.
- Cultural prompting changes not only descriptive judgments but also affective evaluations, including sentiment-based ingroup preference for certain prompted identities.
- Even when culturally closer prompting improves alignment with human descriptions, it fails to fully recover human semantic diversity and often retains an affectively elevated style; the same partial reproduction appears in structured judgments such as impressions of safety, beauty, wealth, and well-being.
- Overall, the paper argues LLMs perceive cities through a culturally uneven reference frame rather than a universal standpoint, shaping what feels ordinary, familiar, and positively valued.