WIST: Web-Grounded Iterative Self-Play Tree for Domain-Targeted Reasoning Improvement

arXiv cs.AI / 3/25/2026


Key Points

  • WIST (Web-grounded Iterative Self-play Tree) proposes a reinforcement-learning framework for improving domain-targeted reasoning by learning directly from the open web without relying on a curated domain corpus.
  • The method incrementally builds a domain exploration tree, retrieves and cleans path-consistent web text to form a controllable training environment, then runs Challenger–Solver self-play using verifiable rewards.
  • WIST feeds back learnability signals to update node posteriors and uses an adaptive curriculum to guide subsequent exploration, aiming to prevent drift common in purely endogenous self-play.
  • Experiments across four model backbones show consistent gains over the base models, with reported overall improvements of up to +9.8 (Qwen3-4B-Base) and +9.7 (OctoThinker-8B).
  • The approach is domain-steerable, yielding larger improvements in specialized areas (e.g., +14.79 in medicine for Qwen3-8B-Base), and ablations support the contribution of its core components; code is released on GitHub.
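The loop described in the points above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the tree structure, the posterior-update rule, and the idea that learnability peaks when the Solver succeeds about half the time are all assumptions made for clarity; `retrieve`, `challenger`, `solver`, and `verify` are hypothetical stand-ins for the paper's components.

```python
class Node:
    """One subtopic in the domain exploration tree (illustrative)."""
    def __init__(self, topic, parent=None):
        self.topic = topic
        self.parent = parent
        self.children = []
        self.posterior = 0.5  # estimated learnability of this subtopic

def select_node(root):
    """Adaptive curriculum (sketch): greedily follow the child
    with the highest learnability posterior down to a leaf."""
    node = root
    while node.children:
        node = max(node.children, key=lambda c: c.posterior)
    return node

def wist_iteration(root, retrieve, challenger, solver, verify,
                   n_rollouts=8, lr=0.1):
    """One Challenger-Solver self-play round (hypothetical sketch)."""
    node = select_node(root)
    corpus = retrieve(node)                  # path-consistent web text for the subtopic
    question, answer = challenger(corpus)    # Challenger poses a verifiable task
    successes = sum(verify(solver(question, corpus), answer)
                    for _ in range(n_rollouts))
    rate = successes / n_rollouts            # verifiable reward, averaged over rollouts
    # Assumed learnability signal: tasks the Solver solves about half the
    # time are the most informative, so the signal peaks at rate == 0.5.
    learnability = 1.0 - abs(rate - 0.5) * 2
    node.posterior += lr * (learnability - node.posterior)
    return rate
```

Under this sketch, subtopics that are already mastered (rate near 1) or far too hard (rate near 0) see their posteriors decay, steering subsequent exploration toward subtopics at the edge of the Solver's ability.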

Abstract

Recent progress in reinforcement learning with verifiable rewards (RLVR) offers a practical path to self-improvement of language models, but existing methods face a key trade-off: endogenous self-play can drift over iterations, while corpus-grounded approaches rely on curated data environments. We present **WIST**, a **W**eb-grounded **I**terative **S**elf-play **T**ree framework for domain-targeted reasoning improvement that learns directly from the open web without requiring any pre-arranged domain corpus. WIST incrementally expands a domain tree for exploration, and retrieves and cleans path-consistent web text to construct a controllable training environment. It then performs Challenger–Solver self-play with verifiable rewards, and feeds learnability signals back to update node posteriors and guide subsequent exploration through an adaptive curriculum. Across four backbones, WIST consistently improves over the base models and typically outperforms both purely endogenous self-evolution and corpus-grounded self-play baselines, with Overall gains reaching **+9.8** (*Qwen3-4B-Base*) and **+9.7** (*OctoThinker-8B*). WIST is also domain-steerable, improving *Qwen3-8B-Base* by **+14.79** in medicine and *Qwen3-4B-Base* by **+5.28** on PhyBench. Ablations further confirm the importance of WIST's key components for stable open-web learning. Our code is available at https://github.com/lfy-123/WIST.