Marco DeepResearch: Unlocking Efficient Deep Research Agents via Verification-Centric Design

arXiv cs.CL / 3/31/2026


Key Points

  • Marco DeepResearch is a deep research agent designed for long-horizon, open-ended investigations that relies on explicit verification to prevent error propagation.
  • The approach improves QA data synthesis, trajectory construction, and inference-time behavior by embedding verification mechanisms at each stage.
  • It uses Marco DeepResearch itself as a verifier during test-time scaling to boost performance on difficult questions.
  • On challenging benchmarks such as BrowseComp and BrowseComp-ZH, it significantly outperforms 8B-scale deep research agents and, within a 600 tool-call budget, surpasses or approaches several 30B-scale systems.

Abstract

Deep research agents autonomously conduct open-ended investigations, integrating complex information retrieval with multi-step reasoning across diverse sources to solve real-world problems. To sustain this capability on long-horizon tasks, reliable verification is critical during both training and inference. A major bottleneck in existing paradigms stems from the lack of explicit verification mechanisms in QA data synthesis, trajectory construction, and test-time scaling: errors introduced at each stage propagate downstream and degrade overall agent performance. To address this, we present Marco DeepResearch, a deep research agent optimized with a verification-centric design at three levels. **(1) QA Data Synthesis:** We introduce verification mechanisms into graph-based and agent-based QA synthesis to control question difficulty while ensuring answers are unique and correct. **(2) Trajectory Construction:** We design a verification-driven trajectory synthesis method that injects explicit verification patterns into training trajectories. **(3) Test-Time Scaling:** We use Marco DeepResearch itself as a verifier at inference time, effectively improving performance on challenging questions. Extensive experimental results demonstrate that Marco DeepResearch significantly outperforms 8B-scale deep research agents on the most challenging benchmarks, such as BrowseComp and BrowseComp-ZH. Crucially, under a maximum budget of 600 tool calls, Marco DeepResearch even surpasses or approaches several 30B-scale agents, such as Tongyi DeepResearch-30B.
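The test-time scaling idea above — using the agent itself as a verifier over candidate rollouts — can be sketched as a best-of-n selection loop. The sketch below is illustrative only: `run_agent` and `verify` are hypothetical stand-ins for the actual agent rollout and self-verification calls, which the abstract does not specify.

```python
# Hypothetical sketch of verifier-based test-time scaling (best-of-n selection).
# run_agent() and verify() are illustrative stubs, not the paper's actual API.

def run_agent(question: str, seed: int) -> str:
    # Stand-in for a full research rollout (search, browse, reason, answer).
    return f"answer-{seed % 3}"

def verify(question: str, answer: str) -> float:
    # Stand-in for the agent acting as its own verifier; returns a score in [0, 1].
    return 1.0 if answer.endswith("0") else 0.5

def best_of_n(question: str, n: int = 4) -> str:
    # Sample n independent rollouts, score each with the verifier, keep the best.
    candidates = [run_agent(question, seed=i) for i in range(n)]
    scores = [verify(question, a) for a in candidates]
    return candidates[scores.index(max(scores))]
```

In practice each rollout would itself consume tool calls, so the 600-call budget mentioned in the abstract would be shared across the n candidate trajectories and the verification passes.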