Playing Psychic: Using Thought Trees to Predict Reasoning Models Accuracy on Coding Tasks

arXiv cs.AI / 4/21/2026


Key Points

  • The paper studies how frontier reasoning LLMs perform on real-world coding benchmarks, extending evaluation beyond standard competitive programming tests.
  • It introduces a method to automatically generate new coding tasks of arbitrary difficulty and structure from existing benchmarks.
  • The authors find that not only the contents but also the structure of a model’s reasoning trace is a strong predictor of whether the final answer is correct.
  • They propose “structured thought-trees” to represent reasoning traces, train a lightweight classifier to assess trace correctness from extracted features, and show that flagging and retrying structurally anomalous traces improves accuracy in lower-complexity settings.

Abstract

Recent advances in large language models (LLMs) have shown that test-time scaling can substantially improve model performance on complex tasks, particularly in the coding domain. Under this paradigm, models use a larger token budget during inference to generate intermediate reasoning traces before producing a final answer. However, current evaluations primarily rely on competitive programming benchmarks, which may not capture the full range of reasoning abilities. In this work, we perform a systematic study of frontier reasoning models to understand their performance on real-world coding benchmarks. To gain more insights into the performance of such models, we devise a programmatic way to *automatically generate* coding tasks of arbitrary difficulty and structure from existing benchmarks. Using this framework, our analysis reveals that the structure of a reasoning trace, not just its contents, is a strong predictor of correctness. Motivated by this, we propose structured thought-trees as a means to represent reasoning traces. To illustrate their use, we train a lightweight classifier on features extracted from thought-trees to predict trace correctness, and demonstrate that flagging and retrying structurally anomalous traces based on the extracted features yields consistent gains at lower complexity levels.
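The pipeline the abstract describes, representing a reasoning trace as a tree, extracting structural features, and flagging anomalous traces for retry, can be sketched roughly as follows. Note this is an illustrative sketch only: the `ThoughtNode` schema, the feature set (depth, node count, branching factor), and the anomaly thresholds are all assumptions, not the paper's actual implementation.

```python
# Illustrative sketch of thought-tree feature extraction and anomaly flagging.
# The tree schema, features, and thresholds are hypothetical placeholders,
# not the paper's actual method.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class ThoughtNode:
    """One step in a reasoning trace; children are sub-steps or branches."""
    text: str
    children: List["ThoughtNode"] = field(default_factory=list)


def depth(node: ThoughtNode) -> int:
    """Longest root-to-leaf chain of reasoning steps."""
    return 1 + max((depth(c) for c in node.children), default=0)


def node_count(node: ThoughtNode) -> int:
    """Total number of reasoning steps in the trace."""
    return 1 + sum(node_count(c) for c in node.children)


def max_branching(node: ThoughtNode) -> int:
    """Widest fan-out at any single step (e.g. alternatives explored)."""
    return max([len(node.children)] + [max_branching(c) for c in node.children])


def extract_features(root: ThoughtNode) -> Dict[str, int]:
    """Structural features that a lightweight classifier could consume."""
    return {
        "depth": depth(root),
        "nodes": node_count(root),
        "branching": max_branching(root),
    }


def is_anomalous(features: Dict[str, int],
                 max_depth: int = 12,
                 max_nodes: int = 200) -> bool:
    """Flag traces whose structure deviates from typical correct traces.

    Thresholds here are arbitrary illustrations; in the paper's setting a
    trained classifier would make this decision from the features.
    """
    return features["depth"] > max_depth or features["nodes"] > max_nodes
```

Under this sketch, a generation loop would call `extract_features` on each trace and regenerate (retry) whenever `is_anomalous` returns `True`, which mirrors the flag-and-retry strategy the authors report helps at lower complexity levels.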