Steering LLMs for Culturally Localized Generation

arXiv cs.CL / 3/25/2026

Key Points

  • The paper argues that globally deployed LLMs can exhibit cultural bias due to uneven training data, and that existing localization methods (prompting, post-training alignment) are difficult to control and diagnose.
  • It introduces a mechanistic interpretability approach that uses sparse autoencoders to find interpretable features representing culturally salient information and aggregates them into Cultural Embeddings (CuE); a sketch of this construction follows the list.
  • The authors use CuE for both analysis—diagnosing bias under underspecified prompts—and for white-box “steering” interventions to guide generation toward specific cultural content.
  • Experiments across multiple models show that CuE-based steering improves cultural faithfulness, elicits significantly rarer, long-tail cultural concepts than prompting alone, and complements black-box localization methods.
  • The results suggest that localization failures often stem from poor elicitation rather than from missing long-tail knowledge, though this varies across cultures; the method thus offers both a diagnostic lens and a controllable steering mechanism for culturally localized generation.
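
The CuE construction in the second bullet can be made concrete. The following is a minimal PyTorch sketch, not the authors' released code: the `SparseAutoencoder` interface, the activation-gap selection criterion, and the top-k weighted aggregation in `cultural_embedding` are all illustrative assumptions about how culture-selective SAE features could be pooled into a single CuE direction.

```python
import torch


class SparseAutoencoder(torch.nn.Module):
    """A standard SAE trained on residual-stream activations of width d_model."""

    def __init__(self, d_model: int, n_features: int):
        super().__init__()
        self.encoder = torch.nn.Linear(d_model, n_features)
        self.decoder = torch.nn.Linear(n_features, d_model, bias=False)

    def feature_acts(self, resid: torch.Tensor) -> torch.Tensor:
        # resid: (n_tokens, d_model) -> non-negative sparse feature activations
        return torch.relu(self.encoder(resid))


def cultural_embedding(sae: SparseAutoencoder,
                       culture_acts: torch.Tensor,
                       baseline_acts: torch.Tensor,
                       top_k: int = 32) -> torch.Tensor:
    """Pool the decoder directions of features that fire preferentially
    on culture-specific text into one unit-norm CuE vector."""
    # How much more each feature fires on culture-specific vs. generic text.
    gap = (sae.feature_acts(culture_acts).mean(0)
           - sae.feature_acts(baseline_acts).mean(0))
    # Keep only the most culture-selective features (selection rule assumed).
    idx = torch.topk(gap, top_k).indices
    # Activation-gap-weighted sum of their decoder directions.
    cue = (gap[idx].unsqueeze(1) * sae.decoder.weight.T[idx]).sum(dim=0)
    return cue / cue.norm()
```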

Abstract

LLMs are deployed globally, yet produce responses biased towards cultures with abundant training data. Existing cultural localization approaches such as prompting or post-training alignment are black-box, hard to control, and do not reveal whether failures reflect missing knowledge or poor elicitation. In this paper, we address these gaps using mechanistic interpretability to uncover and manipulate cultural representations in LLMs. Leveraging sparse autoencoders, we identify interpretable features that encode culturally salient information and aggregate them into Cultural Embeddings (CuE). We use CuE both to analyze implicit cultural biases under underspecified prompts and to construct white-box steering interventions. Across multiple models, we show that CuE-based steering increases cultural faithfulness and elicits significantly rarer, long-tail cultural concepts than prompting alone. Notably, CuE-based steering is complementary to black-box localization methods, offering gains when applied on top of prompt-augmented inputs. This also suggests that models do benefit from better elicitation strategies, and do not necessarily lack long-tail knowledge representation, though this varies across cultures. Our results provide both diagnostic insight into cultural representations in LLMs and a controllable method to steer towards desired cultures.
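
The white-box steering interventions described in the abstract can be illustrated with a hedged sketch of one common implementation pattern: a forward hook that adds a scaled CuE direction to the residual stream at a chosen decoder layer during generation. The `add_cue_steering` helper, the layer index, the scale `alpha`, and the `model.model.layers` attribute path (a Llama-style HuggingFace layout) are assumptions, not the paper's published configuration.

```python
import torch


def add_cue_steering(model, layer_idx: int, cue: torch.Tensor, alpha: float = 4.0):
    """Register a forward hook that shifts every token's residual activation
    toward the CuE direction at one decoder layer (Llama-style layout assumed)."""
    def hook(module, inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        steered = hidden + alpha * cue.to(hidden.dtype).to(hidden.device)
        return (steered, *output[1:]) if isinstance(output, tuple) else steered
    return model.model.layers[layer_idx].register_forward_hook(hook)


# Steer toward a target culture, generate, then restore the model.
# handle = add_cue_steering(model, layer_idx=16, cue=cue_vector)
# output = model.generate(**inputs, max_new_tokens=128)
# handle.remove()
```

Because the hook is removable, steering of this kind can be layered on top of prompt-augmented inputs and switched off afterwards, which matches the paper's observation that CuE-based steering and black-box localization methods are complementary.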