What Makes Good Multilingual Reasoning? Disentangling Reasoning Traces with Measurable Features

arXiv cs.CL / 4/7/2026


Key Points

  • The paper argues that multilingual reasoning quality is not simply a matter of making reasoning in other languages resemble English; instead, it investigates which measurable characteristics actually predict accuracy.
  • It introduces a set of measurable reasoning-trace features covering multilingual alignment, reasoning steps, and reasoning flow, then uses logistic regression to quantify their relationship to final answer accuracy.
  • By training sparse autoencoders on multilingual traces, the authors discover latent reasoning concepts that underpin or extend the proposed features.
  • Experiments across two mathematical reasoning benchmarks, four large reasoning models, and 10 languages show that while most features correlate positively with accuracy overall, the strength—and even direction—of these associations can vary substantially by language.
  • The results challenge English-centric reward/optimization designs and suggest the need for adaptive, language-aware objectives for multilingual benchmark and reward design.
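The core analysis in the second bullet can be sketched in a few lines. The feature names, effect sizes, and data below are invented stand-ins, not the paper's actual features or results; the point is only to show how a per-language logistic regression can reveal that the same trace feature predicts accuracy positively in one language and negatively in another.

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, epochs=2000):
    """Plain logistic regression by full-batch gradient descent.
    Returns per-feature weights and an intercept."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted P(answer correct)
        w -= lr * (X.T @ (p - y)) / n
        b -= lr * np.mean(p - y)
    return w, b

# Synthetic data: "align" plays the role of a multilingual-alignment score
# whose true effect is positive in lang_A but negative in lang_B; "steps"
# stands in for a normalized reasoning-step count. All values are made up.
rng = np.random.default_rng(0)
coefs = {}
for lang, effect in [("lang_A", +1.5), ("lang_B", -1.5)]:
    align = rng.normal(size=500)
    steps = rng.normal(size=500)
    true_logit = effect * align + 0.5 * steps
    correct = (rng.random(500) < 1.0 / (1.0 + np.exp(-true_logit))).astype(float)
    w, _ = fit_logistic(np.column_stack([align, steps]), correct)
    coefs[lang] = w
    print(lang, "align coef:", round(w[0], 2), "steps coef:", round(w[1], 2))
```

Fitting one regression per language, rather than pooling all languages, is what makes the sign flips visible: here the recovered alignment coefficient comes out positive for `lang_A` and negative for `lang_B`.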

Abstract

Large Reasoning Models (LRMs) still exhibit large performance gaps between English and other languages, yet much current work assumes these gaps can be closed simply by making reasoning in every language resemble English reasoning. This work challenges that assumption by asking instead: what actually characterizes effective reasoning in multilingual settings, and to what extent do English-derived reasoning features genuinely help in other languages? We first define a suite of measurable reasoning features spanning multilingual-alignment, reasoning-step, and reasoning-flow aspects of reasoning traces, and use logistic regression to quantify how each feature associates with final answer accuracy. We further train sparse autoencoders over multilingual traces to automatically discover latent reasoning concepts that instantiate or extend these features. Finally, we use the features as test-time selection policies to examine whether they can steer models toward stronger multilingual reasoning. Across two mathematical reasoning benchmarks, four LRMs, and 10 languages, we find that most features are positively associated with accuracy, but the strength of association varies considerably across languages and can even reverse in some. Our findings challenge English-centric reward designs and point toward adaptive objectives that accommodate language-specific reasoning patterns, with concrete implications for multilingual benchmark and reward design.
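The sparse-autoencoder step in the abstract can also be sketched at toy scale: a hidden layer wider than the input, ReLU activations, and an L1 penalty that keeps most latent units inactive, so each active unit can be read as a candidate "concept". The dimensions, synthetic data, and penalty weight below are illustrative choices, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, k, lam, lr = 256, 8, 32, 1e-3, 0.05

# Fake "trace embeddings": sparse mixtures of a few ground-truth directions,
# standing in for hidden states collected from multilingual reasoning traces.
dirs = rng.normal(size=(4, d))
codes = rng.random((n, 4)) * (rng.random((n, 4)) < 0.3)
X = codes @ dirs

W_e = rng.normal(scale=0.1, size=(d, k))   # encoder: input -> wide latent
W_d = rng.normal(scale=0.1, size=(k, d))   # decoder: latent -> reconstruction
b_e = np.zeros(k)

def forward(X):
    h = np.maximum(X @ W_e + b_e, 0.0)     # sparse latent activations
    return h, h @ W_d

_, X_hat0 = forward(X)
mse_before = np.mean((X - X_hat0) ** 2)

for _ in range(3000):
    h, X_hat = forward(X)
    d_out = 2.0 * (X_hat - X) / n            # grad of reconstruction MSE
    d_h = d_out @ W_d.T + lam * np.sign(h)   # plus grad of L1 sparsity term
    d_pre = d_h * (h > 0)                    # ReLU gate
    W_d -= lr * (h.T @ d_out)
    W_e -= lr * (X.T @ d_pre)
    b_e -= lr * d_pre.sum(axis=0)

h, X_hat = forward(X)
mse_after = np.mean((X - X_hat) ** 2)
print("MSE before/after:", round(mse_before, 4), round(mse_after, 4))
print("mean active units per input:", round(float((h > 1e-6).sum(1).mean()), 1))
```

After training, inspecting which inputs maximally activate each surviving latent unit is the usual way such concepts are interpreted; here the recoverable structure is just the four planted directions.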