x1: Learning to Think Adaptively Across Languages and Cultures
arXiv cs.CL / 4/21/2026
Key Points
- The paper introduces x1, a family of reasoning models designed to adaptively choose the best reasoning language for each input rather than relying on a single dominant language.
- x1 is trained without expanding the model's knowledge boundaries: it contrasts linguistically distinct reasoning trajectories for the same prompts, isolating the effect of reasoning-language choice (see the sketch after this list).
- Experiments show that adaptive multilingual reasoning improves performance on multilingual mathematical reasoning tasks and on culturally grounded tasks.
- The results suggest that scale alone does not eliminate language effects: larger models shrink cross-lingual gaps in procedural domains like math, but reasoning in a culturally associated language can still provide an advantage in culturally grounded knowledge recall.
- The findings position “language choice” as a functional component of reasoning, with implications for building more generalist and globally competent reasoning models.
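
To make the contrastive setup concrete, here is a minimal Python sketch of how such training pairs could be constructed: for each prompt, trajectories that differ only in reasoning language are paired, and a preference is assigned when exactly one of them reaches the correct answer. The names here (`Trajectory`, `build_preference_pairs`) are illustrative assumptions, not the paper's actual code or objective.

```python
# Hypothetical sketch: build (chosen, rejected) preference pairs from
# reasoning trajectories that answer the same prompt in different languages.
from dataclasses import dataclass
from itertools import combinations

@dataclass
class Trajectory:
    prompt_id: str
    language: str   # language the chain of thought is written in
    reasoning: str  # the chain-of-thought text
    correct: bool   # whether the final answer was right

def build_preference_pairs(trajectories):
    """Group trajectories by prompt and emit (chosen, rejected) pairs
    whose systematic difference is the reasoning language."""
    by_prompt = {}
    for t in trajectories:
        by_prompt.setdefault(t.prompt_id, []).append(t)

    pairs = []
    for group in by_prompt.values():
        for a, b in combinations(group, 2):
            if a.language == b.language:
                continue  # contrast only across languages
            if a.correct == b.correct:
                continue  # need a clear winner to assign preference
            chosen, rejected = (a, b) if a.correct else (b, a)
            pairs.append((chosen, rejected))
    return pairs
```

Pairs built this way could feed any standard preference-optimization objective; because both trajectories answer the same prompt, the learned signal is about which reasoning language works, not about new knowledge.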