RASPRef: Retrieval-Augmented Self-Supervised Prompt Refinement for Large Reasoning Models
arXiv cs.CL / March 31, 2026
Key Points
- The paper introduces RASPRef, a framework for Retrieval-Augmented Self-Supervised Prompt Refinement that optimizes prompts directly for large reasoning models rather than only improving outputs.
- RASPRef iteratively refines prompts by retrieving relevant examples and previously generated reasoning trajectories, then scoring candidate refinements with self-supervised signals such as multi-sample consistency, verifier feedback, and model-generated critiques.
- Experiments on GSM8K-style mathematical reasoning tasks indicate that retrieval-guided prompting can outperform a static prompting baseline.
- The authors analyze how factors like retrieval quality, trajectory selection, and the choice of self-supervised feedback signals affect the effectiveness of prompt refinement.
- The work argues that prompt engineering remains a key performance lever for reasoning-focused LLMs and proposes a scalable, annotation-free method for improving prompts across tasks and domains.
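The refinement loop the key points describe can be sketched in miniature. Everything below is illustrative, not the paper's implementation: the function names (`refine_prompt`, `consistency_score`), the stubbed model, and the use of a fixed edit list in place of retrieved examples and critiques are all assumptions. The one faithful idea is the self-supervised signal: scoring a prompt by how often repeated samples agree, so no gold labels are needed.

```python
import random
from collections import Counter

def mock_model(prompt: str, question: str, seed: int) -> int:
    # Stand-in for an LLM call (hypothetical; a real system would query a model).
    # Deterministic per (prompt, question, seed) within a run.
    random.seed(hash((prompt, question, seed)) % (2**32))
    # Toy assumption: better prompts yield less answer variance.
    noise = 1 if "step by step" in prompt else 3
    return 42 + random.randint(0, noise)

def consistency_score(prompt: str, question: str, n_samples: int = 8) -> float:
    # Multi-sample consistency: fraction of samples that agree with the
    # majority answer. Self-supervised, since no reference answer is used.
    answers = [mock_model(prompt, question, s) for s in range(n_samples)]
    _, majority_count = Counter(answers).most_common(1)[0]
    return majority_count / n_samples

def refine_prompt(base_prompt: str, question: str, edits: list[str],
                  rounds: int = 3) -> tuple[str, float]:
    # Greedy refinement loop: each round, append each candidate edit
    # (standing in for retrieved examples or model critiques) and keep
    # the variant with the highest consistency score.
    best_prompt = base_prompt
    best_score = consistency_score(base_prompt, question)
    for _ in range(rounds):
        for edit in edits:
            candidate = best_prompt + " " + edit
            score = consistency_score(candidate, question)
            if score > best_score:
                best_prompt, best_score = candidate, score
    return best_prompt, best_score

prompt, score = refine_prompt(
    "Solve the problem.",
    "What is 6 * 7?",
    edits=["Think step by step.", "Show your work."],
)
```

In a real system the edit candidates would come from a retriever over prior trajectories, and the score could combine consistency with verifier feedback; the greedy accept-if-better rule is one simple choice among many.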