CoT2-Meta: Budgeted Metacognitive Control for Test-Time Reasoning

arXiv cs.AI / March 31, 2026


Key Points

  • CoT2-Meta is a training-free test-time reasoning framework that adds metacognitive control to object-level chain-of-thought generation.
  • It uses a meta-controller to decide when to expand, prune, repair, stop, or fall back, guided by strategy-conditioned generation and tree-structured search.
  • An online process oracle evaluates step-level reasoning trajectories, enabling more targeted computation allocation under fixed inference budgets.
  • Across standard benchmarks (e.g., MATH, GPQA, GSM8K, BBEH, MMMU-Pro, HLE), CoT2-Meta outperforms strong baselines including ReST-MCTS, with reported gains ranging from about +1.15 to +5.2 points on key tasks.
  • The paper also reports improved compute scaling, calibration/selective prediction, and consistent effectiveness across a broader set of 15 benchmarks and multiple backbone families.
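The meta-controller's decision repertoire (expand, prune, repair, stop, fall back) can be pictured as a simple budget- and score-conditioned policy. The sketch below is illustrative only: the action names come from the paper, but the thresholds, the `repair_attempts` cap, and the function signature are hypothetical stand-ins, not the authors' implementation.

```python
from enum import Enum

class Action(Enum):
    EXPAND = "expand"
    PRUNE = "prune"
    REPAIR = "repair"
    STOP = "stop"
    FALLBACK = "fallback"

def choose_action(oracle_score, budget_left, repair_attempts,
                  expand_thresh=0.6, repair_thresh=0.3, max_repairs=2):
    """Hypothetical meta-level policy over a partial trajectory.

    oracle_score: step-level quality estimate in [0, 1] from a process oracle.
    budget_left:  remaining inference budget units.
    Thresholds and the repair cap are illustrative assumptions.
    """
    if budget_left <= 0:
        # Out of budget: keep a good trajectory, otherwise fall back.
        return Action.STOP if oracle_score >= repair_thresh else Action.FALLBACK
    if oracle_score >= expand_thresh:
        return Action.EXPAND            # promising: spend budget deepening it
    if oracle_score >= repair_thresh:
        # Borderline: try a targeted repair a bounded number of times.
        return Action.REPAIR if repair_attempts < max_repairs else Action.PRUNE
    return Action.PRUNE                 # low quality: cut the branch
```

The point of the sketch is the control structure, not the numbers: metacognitive control means the same score triggers different actions depending on remaining budget and repair history.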

Abstract

Recent test-time reasoning methods improve performance by generating more candidate chains or searching over larger reasoning trees, but they typically lack explicit control over when to expand, what to prune, how to repair, and when to abstain. We introduce CoT2-Meta, a training-free metacognitive reasoning framework that combines object-level chain-of-thought generation with meta-level control over partial reasoning trajectories. The framework integrates four components: strategy-conditioned thought generation, tree-structured search, an online process oracle for step-level reasoning evaluation, and a meta-controller that allocates computation through expansion, pruning, repair, stopping, and fallback decisions. Under matched inference budgets, CoT2-Meta consistently outperforms strong single-path, sampling-based, and search-based baselines, including ReST-MCTS. On the default backbone, it achieves 92.8 EM on MATH, 90.4 accuracy on GPQA, 98.65 EM on GSM8K, 75.8 accuracy on BBEH, 85.6 accuracy on MMMU-Pro, and 48.8 accuracy on HLE, with gains over the strongest non-CoT2-Meta baseline of +3.6, +5.2, +1.15, +2.0, +4.3, and +4.3 points, respectively. Beyond these core results, the framework remains effective across a broader 15-benchmark suite spanning knowledge and QA, multi-hop reasoning, coding, and out-of-distribution evaluation. Additional analyses show better compute scaling, improved calibration, stronger selective prediction, targeted repair behavior, and consistent gains across backbone families. These results suggest that explicit metacognitive control is a practical design principle for reliable and compute-efficient test-time reasoning systems.
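To make the interaction of the four components concrete, here is a minimal sketch of a budgeted best-first search over partial trajectories, where `expand_fn`, `oracle_fn`, and `is_final` are caller-supplied stand-ins for the paper's strategy-conditioned generator, online process oracle, and stopping check. Everything here is an assumption for illustration; it is not the authors' code.

```python
import heapq
import itertools

def budgeted_search(root, expand_fn, oracle_fn, is_final, budget):
    """Best-first expansion of a reasoning tree under a fixed budget.

    Each expansion spends one budget unit; the oracle score orders the
    frontier. Returning the best node seen so far when budget runs out
    plays the role of a fallback decision. All names are illustrative.
    """
    counter = itertools.count()  # tie-breaker so the heap never compares nodes
    frontier = [(-oracle_fn(root), next(counter), root)]
    best, best_score = root, oracle_fn(root)
    while frontier and budget > 0:
        _, _, node = heapq.heappop(frontier)
        if is_final(node):
            return node                   # early stop on a complete answer
        for child in expand_fn(node):     # one budget unit per generated step
            budget -= 1
            score = oracle_fn(child)
            if score > best_score:
                best, best_score = child, score
            heapq.heappush(frontier, (-score, next(counter), child))
            if budget <= 0:
                break
    return best                           # fallback: best partial trajectory
```

A toy usage, with strings as trajectories and a fraction-of-`"a"` oracle, shows the search following high-scoring branches first and stopping as soon as a complete node is reached.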