PRISM-MCTS: Learning from Reasoning Trajectories with Metacognitive Reflection

arXiv cs.AI / 4/8/2026


Key Points

  • The paper proposes PRISM-MCTS, a reasoning framework that improves over prior MCTS-style methods by sharing information across rollouts rather than treating each trajectory as isolated.
  • PRISM-MCTS combines a Process Reward Model (PRM) with a dynamic shared memory to capture both effective heuristics and recurring fallacies, reinforcing good branches and pruning error-prone ones.
  • The authors introduce a data-efficient few-shot training strategy for the PRM, enabling high-fidelity evaluation without large-scale training data.
  • Experiments on multiple reasoning benchmarks show PRISM-MCTS cuts the required trajectories roughly in half on GPQA and outperforms baselines including MCTS-RAG and Search-o1, demonstrating a more judicious use of inference compute.
  • The work positions test-time computation as a more central factor than classic pre-training scaling laws for deliberative reasoning models, motivating more efficient search-and-reflection methods.
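The core idea in the key points above, a cross-rollout memory that reinforces good branches and penalizes known-bad ones during tree search, can be illustrated with a minimal sketch. Everything here (the `SharedMemory` class, the additive bias term, the constants) is an assumption for illustration, not the authors' implementation:

```python
import math

class Node:
    """One node in the search tree; `step` is the reasoning step it represents."""
    def __init__(self, step, parent=None):
        self.step = step
        self.parent = parent
        self.children = []
        self.visits = 0
        self.value = 0.0

class SharedMemory:
    """Hypothetical cross-rollout memory: counts how often a step pattern
    appeared in successful ("heuristic") vs. failed ("fallacy") trajectories."""
    def __init__(self):
        self.heuristics = {}
        self.fallacies = {}

    def record(self, steps, success):
        book = self.heuristics if success else self.fallacies
        for s in steps:
            book[s] = book.get(s, 0) + 1

    def bias(self, step):
        # Positive bonus for steps seen in good rollouts, penalty for bad ones.
        return 0.1 * (self.heuristics.get(step, 0) - self.fallacies.get(step, 0))

def select(node, memory, c=1.4):
    """Standard UCT selection, with the memory-derived bias added to each
    child's score so later rollouts share what earlier ones learned."""
    def score(ch):
        if ch.visits == 0:
            return float("inf")
        return (ch.value / ch.visits
                + c * math.sqrt(math.log(node.visits) / ch.visits)
                + memory.bias(ch.step))
    return max(node.children, key=score)

# Two rollouts elsewhere recorded one good and one bad step pattern:
mem = SharedMemory()
mem.record(["expand-lemma"], success=True)
mem.record(["circular-argument"], success=False)

root = Node("root"); root.visits = 2
a = Node("expand-lemma", root); a.visits = 1; a.value = 0.5
b = Node("circular-argument", root); b.visits = 1; b.value = 0.5
root.children = [a, b]
chosen = select(root, mem)  # the memory bias breaks the tie toward `a`
```

With equal visit counts and values, vanilla UCT would treat both children identically; the shared-memory term is what steers search toward previously successful patterns and away from recurring fallacies.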

Abstract

Siyuan Cheng, Bozhong Tian, Yanchao Hao, Zheng Wei
Published: 06 Apr 2026 (last modified: 06 Apr 2026) · ACL 2026 Findings · CC BY 4.0
Keywords: Efficient/Low-Resource Methods for NLP, Generation, Question Answering

The emergence of reasoning models, exemplified by OpenAI o1, signifies a transition from intuitive to deliberative cognition, effectively reorienting the scaling laws from pre-training paradigms toward test-time computation. While Monte Carlo Tree Search (MCTS) has shown promise in this domain, existing approaches typically treat each rollout as an isolated trajectory. This lack of information sharing leads to severe inefficiency and substantial computational redundancy, as the search process fails to leverage insights from prior explorations. To address these limitations, we propose PRISM-MCTS, a novel reasoning framework that draws inspiration from human parallel thinking and reflective processes. PRISM-MCTS integrates a Process Reward Model (PRM) with a dynamic shared memory, capturing both "Heuristics" and "Fallacies". By reinforcing successful strategies and pruning error-prone branches, PRISM-MCTS effectively achieves refinement. Furthermore, we develop a data-efficient training strategy for the PRM, achieving high-fidelity evaluation under a few-shot regime. Empirical evaluations across diverse reasoning benchmarks substantiate the efficacy of PRISM-MCTS. Notably, it halves the trajectory requirements on GPQA while surpassing MCTS-RAG and Search-o1, demonstrating that it scales inference by reasoning judiciously rather than exhaustively.
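The abstract's "high-fidelity evaluation under a few-shot regime" can be pictured with a toy stand-in: a process scorer that rates a reasoning step by similarity to a handful of labeled exemplars. The paper's PRM is a learned model; the bag-of-words cosine similarity and the exemplar names below are purely illustrative assumptions:

```python
import math
from collections import Counter

def bow(text):
    """Bag-of-words vector as a token-count dictionary."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two Counter vectors."""
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

class FewShotPRM:
    """Toy process scorer: a step is judged by its nearest good and bad
    exemplar. A real PRM would be a trained model, not a lexical match."""
    def __init__(self, good_steps, bad_steps):
        self.good = [bow(s) for s in good_steps]
        self.bad = [bow(s) for s in bad_steps]

    def score(self, step):
        v = bow(step)
        g = max((cosine(v, e) for e in self.good), default=0.0)
        b = max((cosine(v, e) for e in self.bad), default=0.0)
        return g - b  # positive: resembles sound process steps

# A handful of labeled exemplars stands in for the few-shot training data:
prm = FewShotPRM(
    good_steps=["verify each algebraic step"],
    bad_steps=["assume the answer"],
)
```

Steps scoring below some threshold would then be pruned from the search tree, which is the mechanism by which process-level rewards cut wasted rollouts.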