LangFIR: Discovering Sparse Language-Specific Features from Monolingual Data for Language Steering

arXiv cs.CL / 4/7/2026


Key Points

  • The paper introduces LangFIR, a method to identify sparse language-specific SAE features from only small amounts of monolingual data by using random-token filtering to remove language-agnostic directions.
  • LangFIR shows that the resulting features are extremely sparse, highly selective for target languages, and causally important, since directional ablation increases cross-entropy loss only for the corresponding language.
  • The authors use the discovered language-specific features to build steering vectors for multilingual text generation control, improving average BLEU across three model sizes and three datasets covering twelve languages.
  • Results outperform the strongest monolingual baseline and surpass approaches that require parallel data, suggesting that language identity can be localized in sparse feature directions without costly multilingual supervision.
  • Code is released publicly, enabling researchers to reproduce and extend the language-steering feature discovery approach.
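The core filtering idea in the key points can be sketched in a few lines. This is an illustrative reconstruction, not the paper's implementation: the function name, thresholds, and toy activations below are all assumptions. The idea is to keep SAE features that fire frequently on target-language text but rarely on random-token sequences, since random tokens surface language-agnostic features that should be filtered out.

```python
import numpy as np

def find_language_specific_features(acts_lang, acts_rand,
                                    lang_thresh=0.5, rand_thresh=0.1):
    """acts_lang, acts_rand: (n_tokens, n_features) SAE activation matrices.
    Return indices of features frequently active on target-language inputs
    but rarely active on random-token sequences. Thresholds are illustrative."""
    freq_lang = (acts_lang > 0).mean(axis=0)  # activation frequency on language data
    freq_rand = (acts_rand > 0).mean(axis=0)  # activation frequency on random tokens
    mask = (freq_lang >= lang_thresh) & (freq_rand <= rand_thresh)
    return np.flatnonzero(mask)

# Toy example: feature 0 fires on everything (language-agnostic),
# feature 1 fires only on language data (language-specific), feature 2 is dead.
acts_lang = np.array([[1.0, 0.8, 0.0],
                      [0.9, 0.7, 0.0],
                      [1.1, 0.0, 0.0]])
acts_rand = np.array([[1.0, 0.0, 0.0],
                      [0.8, 0.0, 0.2]])
print(find_language_specific_features(acts_lang, acts_rand))  # → [1]
```

In this sketch, feature 0 is discarded despite always firing on the target language, because it also fires on random tokens; that subtraction is what distinguishes the method from simply selecting the most active features.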

Abstract

Large language models (LLMs) show strong multilingual capabilities, yet reliably controlling the language of their outputs remains difficult. Representation-level steering addresses this by adding language-specific vectors to model activations at inference time, but identifying language-specific directions in the residual stream often relies on multilingual or parallel data that can be expensive to obtain. Sparse autoencoders (SAEs) decompose residual activations into interpretable, sparse feature directions and offer a natural basis for this search, yet existing SAE-based approaches face the same data constraint. We introduce LangFIR (Language Feature Identification via Random-token Filtering), a method that discovers language-specific SAE features using only a small amount of monolingual data and random-token sequences. Many SAE features consistently activated by target-language inputs do not encode language identity. Random-token sequences surface these language-agnostic features, allowing LangFIR to filter them out and isolate a sparse set of language-specific features. We show that these features are extremely sparse, highly selective for their target language, and causally important: directional ablation increases cross-entropy loss only for the corresponding language. Using these features to construct steering vectors for multilingual generation control, LangFIR achieves the best average BLEU across three models (Gemma 3 1B, Gemma 3 4B, and Llama 3.1 8B), three datasets, and twelve target languages, outperforming the strongest monolingual baseline and surpassing methods that rely on parallel data. Our results suggest that language identity in multilingual LLMs is localized in a sparse set of feature directions discoverable with monolingual data. Code is available at https://anonymous.4open.science/r/LangFIR-C0F5/.
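The abstract describes two activation-level interventions: adding a steering vector to the residual stream at inference time, and directional ablation, which removes the component of an activation along a language direction. A minimal numerical sketch of both, with illustrative vectors and a scaling parameter `alpha` that are assumptions rather than the paper's settings:

```python
import numpy as np

def steer(h, v, alpha=1.0):
    """Add a language steering vector v (scaled by alpha) to a residual
    activation h, nudging generation toward the target language."""
    return h + alpha * v

def ablate_direction(h, v):
    """Directional ablation: project out the component of h along v,
    removing the language-specific direction from the activation."""
    v_hat = v / np.linalg.norm(v)
    return h - (h @ v_hat) * v_hat

# Toy activation and language direction (illustrative values).
h = np.array([3.0, 4.0])
v = np.array([1.0, 0.0])
print(steer(h, v, alpha=2.0))   # → [5. 4.]
print(ablate_direction(h, v))   # → [0. 4.]
```

The causal claim in the abstract corresponds to applying `ablate_direction` with a language-specific feature direction and observing that cross-entropy loss rises only on text in that language.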