Less Languages, Less Tokens: An Efficient Unified Logic Cross-lingual Chain-of-Thought Reasoning Framework
arXiv cs.CL / 4/23/2026
📰 News · Ideas & Deep Analysis · Models & Research
Key Points
- The paper introduces UL-XCoT, an efficient unified-logic framework for cross-lingual chain-of-thought reasoning that reduces both latency and redundant token use compared with costly full-trajectory sampling methods.
- UL-XCoT improves efficiency by selecting a small candidate set of languages per query within a language-invariant unified logic space, rather than evaluating many languages broadly.
- During decoding, it prunes low-quality reasoning paths by monitoring trajectory dynamics in the logic space, and then aggregates the remaining high-quality trajectories using voting.
- Experiments on PolyMath (18 languages) and MMLU-ProX-Lite (29 languages) using DeepSeek-R1-Distill-Qwen-7B show competitive accuracy while cutting decoding token costs by over 50% versus prior sampling baselines.
- The method provides more stable gains for low-resource languages, where standard XCoT self-consistency approaches often fail to deliver consistent improvements.
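The select-prune-vote pipeline described in the key points can be sketched roughly as follows. Every function name, score, and threshold below is an illustrative assumption, since the summary does not give UL-XCoT's actual scoring functions or logic-space construction:

```python
from collections import Counter

def select_candidate_languages(lang_scores, k=3):
    """Pick the top-k languages whose (assumed) affinity scores in the
    language-invariant logic space are highest for this query, instead
    of sampling trajectories across many languages."""
    return sorted(lang_scores, key=lang_scores.get, reverse=True)[:k]

def prune_trajectories(trajectories, threshold=0.5):
    """Drop reasoning paths whose (assumed) logic-space quality signal,
    monitored during decoding, falls below a threshold."""
    return [t for t in trajectories if t["quality"] >= threshold]

def vote(trajectories):
    """Aggregate the surviving trajectories by majority vote on the answer."""
    counts = Counter(t["answer"] for t in trajectories)
    return counts.most_common(1)[0][0]

# Toy example: four languages scored, three candidates kept, one path pruned.
lang_scores = {"en": 0.9, "zh": 0.8, "fr": 0.6, "sw": 0.2}
langs = select_candidate_languages(lang_scores, k=3)
trajectories = [
    {"lang": "en", "answer": "42", "quality": 0.9},
    {"lang": "zh", "answer": "42", "quality": 0.7},
    {"lang": "fr", "answer": "17", "quality": 0.3},  # pruned mid-decoding
]
survivors = prune_trajectories(trajectories)
final_answer = vote(survivors)
```

The efficiency claim follows directly from this shape: tokens are only spent on a small candidate set, and low-quality paths stop consuming the decoding budget once pruned.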