AI Navigate

Evaluate-as-Action: Self-Evaluated Process Rewards for Retrieval-Augmented Agents

arXiv cs.AI / March 11, 2026

Ideas & Deep Analysis | Models & Research

Key Points

  • This paper proposes EvalAct (Evaluate-as-Action), a method that explicitly evaluates retrieval quality in retrieval-augmented agents in order to improve the reliability of multi-step reasoning.
  • EvalAct enforces a Search-to-Evaluate protocol: immediately after each search action, the agent assigns a structured evaluation score, aligning process signals with the interaction trajectory.
  • To leverage these evaluation signals, the authors introduce Process-Calibrated Advantage Rescaling (PCAR), an optimization method that rescales advantage values at the segment level, concentrating learning on reliable steps while updating uncertain steps conservatively.
  • Experiments on seven open-domain question-answering benchmarks show that EvalAct achieves the best average accuracy, with pronounced gains on multi-hop reasoning tasks.
  • Ablation studies confirm that the explicit evaluation loop is the primary driver of the improvements, while PCAR provides consistent additional benefits.
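The PCAR idea described above can be sketched in a few lines. This is a hypothetical illustration, not the paper's implementation: the scoring scale ([0, 1]), the linear weighting, and the `floor` parameter are all assumptions made here for clarity; the paper's actual rescaling function may differ.

```python
def pcar_rescale(advantages, segment_ids, eval_scores, floor=0.2):
    """Sketch of segment-level advantage rescaling in the spirit of PCAR.

    advantages:  per-token advantage values (e.g. from a GRPO-style update)
    segment_ids: segment index for each token in the trajectory
    eval_scores: mapping segment_id -> self-evaluation score in [0, 1]
                 (scale is an assumption for this sketch)
    floor:       minimum weight, so low-confidence segments are updated
                 conservatively rather than zeroed out (assumed heuristic)
    """
    rescaled = []
    for adv, seg in zip(advantages, segment_ids):
        score = eval_scores.get(seg, 0.0)
        # Linear interpolation between floor and 1.0 (assumed form):
        # reliable segments keep their full advantage, uncertain ones shrink.
        weight = floor + (1.0 - floor) * score
        rescaled.append(adv * weight)
    return rescaled
```

With `floor=0.2`, a segment scored 1.0 keeps its advantage unchanged, while a segment scored 0.0 has its advantage scaled down to 20%, which matches the "emphasize reliable, update uncertain conservatively" behavior the summary describes.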

Computer Science > Artificial Intelligence

arXiv:2603.09203 (cs)
[Submitted on 10 Mar 2026]

Title:Evaluate-as-Action: Self-Evaluated Process Rewards for Retrieval-Augmented Agents

Authors: Jiangming Shu, Yuxiang Zhang, Ye Ma, Xueyuan Lin, Jitao Sang
Abstract: Retrieval-augmented agents can query external evidence, yet their reliability in multi-step reasoning remains limited: noisy retrieval may derail multi-hop question answering, while outcome-only reinforcement learning provides credit signals that are too coarse to optimize intermediate steps. We propose EvalAct (Evaluate-as-Action), which converts implicit retrieval quality assessment into an explicit action and enforces a coupled Search-to-Evaluate protocol so that each retrieval is immediately followed by a structured evaluation score, yielding process signals aligned with the interaction trajectory. To leverage these signals, we introduce Process-Calibrated Advantage Rescaling (PCAR), a GRPO-based optimization method that rescales advantages at the segment level according to evaluation scores, emphasizing reliable segments while updating uncertain ones conservatively. Experiments on seven open-domain QA benchmarks show that EvalAct achieves the best average accuracy, with the largest gains on multi-hop tasks, and ablations verify that the explicit evaluation loop drives the primary improvements while PCAR provides consistent additional benefits.
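The coupled Search-to-Evaluate protocol from the abstract can be made concrete with a toy agent in which a retrieval cannot be recorded without its evaluation. The class, method names, and score format below are illustrative assumptions, not the paper's API.

```python
from dataclasses import dataclass


@dataclass
class Step:
    """One trajectory segment: a query, its retrieved docs, and the
    structured self-evaluation score (scale in [0, 1] is assumed)."""
    query: str
    docs: list
    eval_score: float


class SearchToEvaluateAgent:
    """Toy agent enforcing the Search-to-Evaluate coupling: every search
    action is immediately followed by an evaluation action, so each
    trajectory segment carries a process signal (hypothetical sketch)."""

    def __init__(self, retriever, evaluator):
        self.retriever = retriever    # query -> list of documents
        self.evaluator = evaluator    # (query, docs) -> score in [0, 1]
        self.trajectory: list[Step] = []

    def search(self, query):
        docs = self.retriever(query)
        # Evaluation is not optional: it runs inside the same action,
        # which is what makes the process signal trajectory-aligned.
        score = self.evaluator(query, docs)
        self.trajectory.append(Step(query, docs, score))
        return docs
```

The resulting per-segment `eval_score` values are exactly the process signals that a PCAR-style update could consume to rescale advantages segment by segment.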
Subjects: Artificial Intelligence (cs.AI)
Cite as: arXiv:2603.09203 [cs.AI]
  (or arXiv:2603.09203v1 [cs.AI] for this version)
  https://doi.org/10.48550/arXiv.2603.09203

Submission history

From: Jiangming Shu
[v1] Tue, 10 Mar 2026 05:22:40 UTC (501 KB)