A Universal Vibe? Finding and Controlling Language-Agnostic Informal Register with SAEs

arXiv cs.CL · March 30, 2026

Key Points

  • Using Sparse Autoencoders (SAEs) on Gemma-2-9B-IT, the paper investigates whether multilingual LLMs represent culture-specific pragmatic registers (e.g., slang) as language-agnostic abstractions or as separate language-specific memorizations.
  • It introduces a new probing dataset designed to disentangle pragmatic register from lexical sensitivity: every target term is polysemous and appears in both literal and informal contexts, so register effects cannot be reduced to word identity alone.
  • The authors find a small but highly robust cross-linguistic “core” of informal-register features that forms a geometrically coherent informal-register subspace, which becomes clearer in deeper model layers (a feature-scoring sketch follows this list).
  • Using activation steering, they show causal shifts in output formality across all tested source languages and report zero-shot transfer to six unseen languages spanning different language families and scripts (a steering sketch follows the abstract).
  • The results are presented as the first mechanistic evidence that multilingual LLMs encode informal register as a portable pragmatic abstraction rather than as mere surface-level heuristics.
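
As a rough illustration of the probing stage described above, the sketch below (not the authors' released code) scores each SAE feature by how much more it fires on informal than on literal uses of the same polysemous terms (in the spirit of the dataset: English "sick" can appear literally, as in "sick with the flu", or as slang, as in "that riff was sick"), then intersects each language's top-scoring features to recover a shared core. The activation tensors and the SAE encoder parameters (`W_enc`, `b_enc`) are assumed, pre-computed inputs, not artifacts from the paper.

```python
import torch

def sae_features(acts: torch.Tensor, W_enc: torch.Tensor, b_enc: torch.Tensor) -> torch.Tensor:
    """Encode residual-stream activations into sparse SAE features.

    acts:  (n_examples, d_model) residual-stream vectors at one layer
    W_enc: (d_model, d_sae) SAE encoder weights (assumed pre-trained)
    b_enc: (d_sae,) SAE encoder bias
    """
    return torch.relu(acts @ W_enc + b_enc)  # standard ReLU SAE encoder

def register_scores(informal_acts, literal_acts, W_enc, b_enc):
    """Per-feature register score: mean activation on informal uses of
    the polysemous terms minus mean activation on literal uses."""
    f_informal = sae_features(informal_acts, W_enc, b_enc).mean(dim=0)
    f_literal = sae_features(literal_acts, W_enc, b_enc).mean(dim=0)
    return f_informal - f_literal  # shape: (d_sae,)

def shared_core(scores_by_lang: dict, k: int = 50) -> set:
    """Intersect each language's top-k register-sensitive features to
    find a cross-linguistic core of shared features."""
    tops = [set(s.topk(k).indices.tolist()) for s in scores_by_lang.values()]
    return set.intersection(*tops)

# Usage, assuming activations collected for English, Hebrew, and Russian:
#   scores = {lang: register_scores(inf[lang], lit[lang], W_enc, b_enc)
#             for lang in ("en", "he", "ru")}
#   core = shared_core(scores, k=50)
```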

Abstract

While multilingual language models successfully transfer factual and syntactic knowledge across languages, it remains unclear whether they process culture-specific pragmatic registers, such as slang, as isolated language-specific memorizations or as unified, abstract concepts. We study this by probing the internal representations of Gemma-2-9B-IT using Sparse Autoencoders (SAEs) across three typologically diverse source languages: English, Hebrew, and Russian. To definitively isolate pragmatic register processing from trivial lexical sensitivity, we introduce a novel dataset in which every target term is polysemous, appearing in both literal and informal contexts. We find that while much of the informal-register signal is distributed across language-specific features, a small but highly robust cross-linguistic core consistently emerges. This shared core forms a geometrically coherent “informal register subspace” that sharpens in the model's deeper layers. Crucially, these shared representations are not merely correlational: activation steering with these features causally shifts output formality across all source languages and transfers zero-shot to six unseen languages spanning diverse language families and scripts. Together, these results provide the first mechanistic evidence that multilingual LLMs internalize informal register not just as surface-level heuristics, but as a portable, language-agnostic pragmatic abstraction.
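
For intuition on the steering experiment the abstract describes, here is a minimal sketch of residual-stream activation steering via a PyTorch forward hook on one of Gemma-2's deeper decoder layers. The layer index, the strength `ALPHA`, and the steering vector itself (in practice a direction built from the shared features' SAE decoder weights; a random unit vector stands in here) are illustrative assumptions, not the paper's reported configuration.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "google/gemma-2-9b-it"  # model studied in the paper
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(
    MODEL, torch_dtype=torch.bfloat16, device_map="auto"
)

LAYER = 24   # assumption: a deeper layer, where the subspace is sharpest
ALPHA = 8.0  # assumption: steering strength

# Assumption: the steering direction would come from the shared SAE
# features' decoder weights; a random unit vector stands in here.
steer_vec = torch.randn(model.config.hidden_size, dtype=model.dtype)
steer_vec = steer_vec / steer_vec.norm()

def steer_hook(module, inputs, output):
    # Decoder layers may return a tuple; hidden states come first.
    hidden = output[0] if isinstance(output, tuple) else output
    hidden = hidden + ALPHA * steer_vec.to(hidden.device)
    return (hidden,) + output[1:] if isinstance(output, tuple) else hidden

handle = model.model.layers[LAYER].register_forward_hook(steer_hook)
try:
    ids = tok("Please describe your weekend.", return_tensors="pt").to(model.device)
    out = model.generate(**ids, max_new_tokens=40)
    print(tok.decode(out[0], skip_special_tokens=True))
finally:
    handle.remove()  # detach the hook so later forward passes are unsteered
```

The try/finally matters: a hook left registered would steer every subsequent forward pass, not just the intended generation, which would confound any formality evaluation.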