A Training-Free Regeneration Paradigm: Contrastive Reflection Memory Guided Self-Verification and Self-Improvement

arXiv cs.CL · March 24, 2026


Key Points

  • The paper introduces a training-free regeneration approach for LLM self-improvement that addresses the accuracy–efficiency trade-off in prior verification-rectification and best-of-N methods.
  • It uses an offline-curated contrastive Reflection Memory (RM) to provide corrective guidance during inference, combining RM-guided self-verification with a single RM-guided regeneration from scratch.
  • Regenerating from scratch is intended to escape faulty reasoning without relying on expensive iterative correction loops or large multi-sample selection.
  • Experiments across nine benchmarks (algorithmic, reasoning, symbolic, and domain-specific) on both small- and large-scale LLMs show improved performance over prior approaches while keeping computational cost low.

Abstract

Verification-guided self-improvement has recently emerged as a promising approach to improving the accuracy of large language model (LLM) outputs. However, existing approaches face a trade-off between inference efficiency and accuracy: iterative verification-rectification is computationally expensive and prone to becoming trapped in faulty reasoning, while best-of-N selection requires extensive sampling without addressing internal model flaws. We propose a training-free regeneration paradigm that leverages an offline-curated contrastive Reflection Memory (RM) to provide corrective guidance, while regenerating from scratch helps break out of faulty reasoning. At inference time, the method performs RM-guided self-verification followed by a single RM-guided regeneration, avoiding both iterative correction and multi-sample selection. We evaluate our method on nine benchmarks spanning algorithmic, reasoning, symbolic, and domain-specific tasks, on both small- and large-scale LLMs. Experimental results show that our method outperforms prior methods while maintaining low computational cost.