TR-ICRL: Test-Time Rethinking for In-Context Reinforcement Learning

arXiv cs.CL / 4/3/2026


Key Points

  • The paper introduces TR-ICRL, a test-time framework for In-Context Reinforcement Learning (ICRL) that tackles the key challenge of reward estimation without ground-truth labels during inference.
  • TR-ICRL retrieves relevant unlabeled instances for a query, generates candidate answers per instance, and derives pseudo-labels via majority voting to synthesize reward signals and formative feedback for iterative refinement.
  • The method merges the synthesized contextual information with the original query and selects the final answer through an additional majority-voting step.
  • Experiments on reasoning and knowledge-intensive benchmarks report substantial gains, including an average 21.23% improvement on MedQA and a 137.59% improvement on AIME2024 for Qwen2.5-7B.
  • The authors provide extensive ablation studies and analyses, and release code for replication and further experimentation.

Abstract

In-Context Reinforcement Learning (ICRL) enables Large Language Models (LLMs) to learn online from external rewards directly within the context window. However, a central challenge in ICRL is reward estimation, as models typically lack access to ground-truth labels during inference. To address this limitation, we propose Test-Time Rethinking for In-Context Reinforcement Learning (TR-ICRL), a novel ICRL framework designed for both reasoning and knowledge-intensive tasks. TR-ICRL first retrieves the most relevant instances from an unlabeled evaluation set for a given query. During each ICRL iteration, the LLM generates a set of candidate answers for every retrieved instance, and a pseudo-label is derived from this set through majority voting. This label then serves as a proxy to provide reward signals and generate formative feedback, guiding the LLM through iterative refinement. Finally, the synthesized contextual information is integrated with the original query to form a comprehensive prompt, with the final answer determined through one more round of majority voting. TR-ICRL is evaluated on mainstream reasoning and knowledge-intensive tasks, where it demonstrates significant performance gains. Remarkably, TR-ICRL improves Qwen2.5-7B by 21.23% on average on MedQA and by 137.59% on AIME2024. Extensive ablation studies and analyses further validate the effectiveness and robustness of our approach. Our code is available at https://github.com/pangpang-xuan/TR_ICRL.
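The pipeline described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: `llm` and `retrieve` are hypothetical callables standing in for the model and the retriever, and the reward/feedback strings are a simplification of whatever prompts the paper actually uses.

```python
from collections import Counter

def majority_vote(answers):
    """Return the most frequent answer among candidates (ties broken arbitrarily)."""
    return Counter(answers).most_common(1)[0][0]

def tr_icrl(query, eval_set, llm, retrieve, k=4, n_samples=8, n_iters=3):
    """Sketch of the TR-ICRL test-time loop (simplified)."""
    # Step 1: retrieve the k instances most relevant to the query
    # from the unlabeled evaluation set.
    instances = retrieve(query, eval_set, k)
    context = []
    for inst in instances:
        feedback = ""
        for _ in range(n_iters):
            # Step 2: sample candidate answers for this instance.
            candidates = [llm(inst, feedback) for _ in range(n_samples)]
            # Step 3: derive a pseudo-label by majority voting.
            pseudo_label = majority_vote(candidates)
            # Step 4: use the pseudo-label as a proxy ground truth to
            # synthesize reward signals and formative feedback for the
            # next refinement iteration.
            rewards = [int(c == pseudo_label) for c in candidates]
            feedback = f"Pseudo-label: {pseudo_label}; rewards: {rewards}"
        context.append((inst, pseudo_label, feedback))
    # Step 5: merge the synthesized context with the original query and
    # pick the final answer with one more round of majority voting.
    final_candidates = [llm((query, context), "") for _ in range(n_samples)]
    return majority_vote(final_candidates)
```

In this reading, the pseudo-label plays the role of the missing ground truth: agreement with it yields a binary reward, and the accumulated feedback is what the model "rethinks" over across iterations before the final vote.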