AI Navigate

Good Reasoning Makes Good Demonstrations: Implicit Reasoning Quality Supervision via In-Context Reinforcement Learning

arXiv cs.LG / 3/11/2026

Ideas & Deep Analysis · Models & Research

Key Points

  • Reinforcement Learning with Verifiable Rewards (RLVR) improves reasoning in large language models but does not differentiate between high-quality and low-quality correct solutions, potentially reinforcing flawed reasoning paths.
  • The paper introduces the concept of Demonstration Utility, highlighting that better reasoning solutions serve as more effective teaching demonstrations.
  • The authors propose Evidence Gain, a quality signal derived from the policy model's own in-context learning ability, which measures demonstration quality without external evaluators.
  • The new training method, In-Context RLVR, implicitly reweights rewards based on Evidence Gain using Bayesian analysis, improving reasoning accuracy and quality on mathematical benchmarks.
  • This approach enhances both solution correctness and underlying reasoning quality in large language models, offering a more nuanced reinforcement learning framework.
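The key points describe scoring a candidate solution by how much it helps the model as an in-context demonstration. As a minimal sketch of that idea (all names here are hypothetical, and the toy log-probability function stands in for the policy model; the paper's exact measurement procedure is not given on this page), Evidence Gain can be read as the change in log-likelihood of reference answers on held-out probe problems when the candidate is prepended as a demonstration:

```python
import math

def evidence_gain(logprob_answer, candidate_demo, probe_set):
    """Toy Evidence Gain: how much prepending candidate_demo as an
    in-context demonstration raises the summed log-likelihood of the
    reference answers on held-out probe problems."""
    base = sum(logprob_answer(p, demo=None) for p in probe_set)
    cond = sum(logprob_answer(p, demo=candidate_demo) for p in probe_set)
    return cond - base

# Stand-in for the policy model's answer log-likelihood: in this toy,
# a demonstration that explains carrying helps arithmetic probes.
def toy_logprob(problem, demo=None):
    helped = demo is not None and "carry" in demo
    return math.log(0.8 if helped else 0.5)

probes = ["17+25=?", "38+47=?"]
good = "Add the ones digits, carry the remainder, then add the tens."
bad = "The answer is 42 because it just is."

print(evidence_gain(toy_logprob, good, probes) > 0)  # True
print(evidence_gain(toy_logprob, bad, probes) > 0)   # False
```

A high-quality trace raises the probe likelihood and gets a positive score; an uninformative trace leaves it unchanged.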

Computer Science > Machine Learning

arXiv:2603.09803 (cs)
[Submitted on 10 Mar 2026]

Title: Good Reasoning Makes Good Demonstrations: Implicit Reasoning Quality Supervision via In-Context Reinforcement Learning

Authors: Tiehua Mei and 7 other authors
Abstract: Reinforcement Learning with Verifiable Rewards (RLVR) improves reasoning in large language models but treats all correct solutions equally, potentially reinforcing flawed traces that get correct answers by chance. We observe that better reasoning makes better teachers: high-quality solutions serve as more effective demonstrations than low-quality ones. We term this teaching ability Demonstration Utility, and show that the policy model's own in-context learning ability provides an efficient way to measure it, yielding a quality signal termed Evidence Gain. To employ this signal during training, we introduce In-Context RLVR. By Bayesian analysis, we show that this objective implicitly reweights rewards by Evidence Gain, assigning higher weights to high-quality traces and lower weights to low-quality ones, without requiring costly computation or external evaluators. Experiments on mathematical benchmarks show improvements in both accuracy and reasoning quality over standard RLVR.
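The abstract states that the In-Context RLVR objective implicitly reweights rewards by Evidence Gain. The paper's actual derivation is implicit and Bayesian; purely as an illustrative explicit form (the sigmoid weighting and `beta` temperature below are assumptions of this sketch, not the paper's formula), the effective reward for a verified-correct trace could be scaled by a monotone function of its Evidence Gain:

```python
import math

def reweighted_reward(correct, evidence_gain, beta=1.0):
    """Hypothetical explicit version of the implicit reweighting:
    scale a correct trace's unit reward by a monotone (sigmoid)
    function of its Evidence Gain. Incorrect traces get zero,
    matching the verifiable-reward convention."""
    if not correct:
        return 0.0
    return 1.0 / (1.0 + math.exp(-beta * evidence_gain))

# A correct but low-quality trace (negative Evidence Gain) earns
# less reward than a correct, high-quality one.
print(reweighted_reward(True, 2.0) > reweighted_reward(True, -2.0))  # True
print(reweighted_reward(False, 5.0))  # 0.0
```

The design intent this mirrors: correctness remains a gate, while reasoning quality modulates how strongly a correct trace is reinforced.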
Subjects: Machine Learning (cs.LG)
Cite as: arXiv:2603.09803 [cs.LG]
  (or arXiv:2603.09803v1 [cs.LG] for this version)
  https://doi.org/10.48550/arXiv.2603.09803

Submission history

From: Tiehua Mei [view email]
[v1] Tue, 10 Mar 2026 15:33:07 UTC (202 KB)