$\pi^2$: Structure-Originated Reasoning Data Improves Long-Context Reasoning Ability of Large Language Models

arXiv cs.LG / 4/8/2026


Key Points

  • The paper proposes a dataset-and-training pipeline called π^2 that curates reasoning data starting from structured sources to improve long-context reasoning in LLMs.
  • π^2 builds multi-hop analytical QA pairs by extracting and expanding tables from Wikipedia, then generating questions whose answers are automatically determined and verified via dual-path code execution.
  • It produces training examples by back-translating step-by-step structured reasoning traces into solutions under realistic web-search context.
  • Supervised fine-tuning of gpt-oss-20b and Qwen3-4B-Instruct-2507 on π^2 delivers consistent gains on multiple long-context reasoning benchmarks (average +4.3% and +2.7%, respectively).
  • The dataset also supports self-distillation: gpt-oss-20b improves its own average performance by +4.4% when trained on its own reasoning traces, and the authors open-source code, data, and models at the provided GitHub link.
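The "dual-path code execution" check can be pictured as accepting an auto-generated answer only when two independently written programs over the same table agree on it. The sketch below is a minimal illustration of that idea, not the paper's actual implementation; the table, functions, and aggregation logic are all hypothetical.

```python
# Hypothetical sketch of dual-path verification: a table-derived
# answer is kept only if two independent programs agree on it.
from itertools import groupby

def path_a(rows):
    # Path A: aggregate medal counts with a dict, take the argmax.
    totals = {}
    for country, medals in rows:
        totals[country] = totals.get(country, 0) + medals
    return max(totals, key=totals.get)

def path_b(rows):
    # Path B: answer the same question via sorting + groupby,
    # deliberately using different logic from path A.
    ordered = sorted(rows)
    sums = [(sum(m for _, m in grp), c)
            for c, grp in groupby(ordered, key=lambda r: r[0])]
    return max(sums)[1]

def verify(rows):
    # Accept the answer only on agreement; otherwise discard the QA pair.
    a, b = path_a(rows), path_b(rows)
    return a if a == b else None

# Toy table: (country, medals) rows, as might be extracted from Wikipedia.
table = [("NOR", 16), ("GER", 12), ("NOR", 10), ("USA", 9)]
print(verify(table))  # both paths agree -> NOR
```

On disagreement the question-answer pair would simply be dropped, so only answers reproducible by two independent computations survive into the training set.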

Abstract

We study a pipeline that curates reasoning data from structured seed data to improve long-context reasoning in large language models (LLMs). Our approach, π^2, constructs high-quality reasoning data through rigorous QA curation: 1) extracting and expanding tables from Wikipedia, 2) generating, from the collected tables and relevant context, realistic multi-hop analytical reasoning questions whose answers are automatically determined and verified through dual-path code execution, and 3) back-translating step-by-step structured reasoning traces into solutions for the QA pairs given realistic web-search context. Supervised fine-tuning of gpt-oss-20b and Qwen3-4B-Instruct-2507 on π^2 yields consistent improvements across four long-context reasoning benchmarks and our companion π^2-Bench, with average absolute accuracy gains of +4.3% and +2.7%, respectively. Notably, the dataset supports self-distillation: gpt-oss-20b improves its average performance by +4.4% when trained on its own reasoning traces, demonstrating π^2's usefulness. Our code, data, and models are open-source at https://github.com/vt-pi-squared/pi-squared.