x1: Learning to Think Adaptively Across Languages and Cultures

arXiv cs.CL / April 21, 2026


Key Points

  • The paper introduces x1, a family of reasoning models designed to adaptively choose the best reasoning language for each input rather than relying on a single dominant language.
  • x1 is trained without expanding the model’s knowledge boundaries, using contrasts between linguistically distinct reasoning trajectories for the same prompts to isolate the impact of reasoning-language choice.
  • Experiments show that adaptive multilingual reasoning improves performance on multilingual mathematical reasoning tasks and on culturally grounded tasks.
  • The results suggest that scaling does not fully eliminate language effects: increased scale narrows cross-lingual gaps in procedural domains like math, but culture-associated languages can still provide advantages in culturally grounded knowledge recall.
  • The findings position “language choice” as a functional component of reasoning, with implications for building more generalist and globally competent reasoning models.
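The training setup described above, contrasting reasoning trajectories written in different languages for the same prompt, can be illustrated with a small sketch. This is not the paper's implementation; the data structures and pairing rule here are illustrative assumptions showing how same-prompt, different-language trajectories could be turned into (chosen, rejected) preference pairs so that the only varying factor is the reasoning language.

```python
# Illustrative sketch (assumed, not from the paper): construct preference
# pairs that contrast reasoning trajectories in different languages for
# the same prompt, isolating the effect of reasoning-language choice.

from dataclasses import dataclass
from itertools import combinations


@dataclass
class Trajectory:
    prompt_id: str
    language: str   # language the reasoning trace is written in
    text: str       # the reasoning trace itself
    correct: bool   # whether the final answer was right


def build_language_contrast_pairs(trajectories):
    """Pair a correct trajectory with an incorrect one for the same prompt,
    keeping only pairs whose reasoning languages differ. Because the prompt
    (and hence the required knowledge) is held fixed, the preference signal
    reflects language choice rather than knowledge differences."""
    by_prompt = {}
    for t in trajectories:
        by_prompt.setdefault(t.prompt_id, []).append(t)

    pairs = []
    for group in by_prompt.values():
        for a, b in combinations(group, 2):
            if a.language == b.language:
                continue  # same-language pairs carry no language signal
            if a.correct and not b.correct:
                pairs.append((a, b))   # (chosen, rejected)
            elif b.correct and not a.correct:
                pairs.append((b, a))
    return pairs


# Example: for one prompt, only the cross-language correct/incorrect
# combination survives the filter.
trajs = [
    Trajectory("q1", "en", "step-by-step in English ...", False),
    Trajectory("q1", "sw", "step-by-step in Swahili ...", True),
    Trajectory("q1", "sw", "another Swahili attempt ...", False),
]
pairs = build_language_contrast_pairs(trajs)
```

In this toy run, the correct Swahili trajectory is preferred over the incorrect English one, while the Swahili-vs-Swahili pair is discarded because it cannot attribute the outcome to language choice.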

Abstract

Languages encode distinct abstractions and inductive priors, yet most large language models (LLMs) overlook this diversity by reasoning in a single dominant language. In this work, we introduce x1, a family of reasoning models that can adaptively reason in an advantageous language on a per-instance basis. To isolate the effect of reasoning-language choice, x1 is constructed without expanding the model's knowledge boundaries and is trained by contrasting linguistically distinct reasoning trajectories for the same input. Our extensive experiments demonstrate the benefits of adaptive multilingual reasoning across multilingual mathematical reasoning and culturally grounded tasks. Moreover, our results challenge a simplistic view of scaling laws: while scaling reduces cross-lingual disparities in procedural domains such as math reasoning, it does not eliminate the advantages of culture-associated languages in culturally grounded tasks, as we empirically show that such reasoning enables more efficient and accurate cultural knowledge recall. Overall, our findings establish language choice as a functional component of reasoning, with implications for building more generalist and globally competent reasoning models.