CAP-CoT: Cycle Adversarial Prompt for Improving Chain of Thoughts in LLM Reasoning

arXiv cs.AI / 4/28/2026


Key Points

  • Chain-of-Thought (CoT) prompting can produce inconsistent step-by-step reasoning on long, multi-stage tasks, resulting in different answers across repeated runs even for the same problem.
  • The paper introduces CAP-CoT, a “Cycle Adversarial Prompt” framework that iteratively improves a single deployed LLM solver by generating candidate CoT chains, creating plausible-but-wrong challenger chains, and using a feedback agent to produce step-aligned corrections (a minimal sketch of this cycle appears after this list).
  • CAP-CoT updates both the solver prompt (based on errors revealed by the challenger) and the challenger prompt (to generate more targeted errors), forming a closed optimization loop across cycles.
  • Experiments on six benchmarks with four different LLM backbones show that CAP-CoT achieves lower run-to-run variability and higher reasoning accuracy within about two to three cycles, along with better robustness to prompt perturbations.
  • The adversarial challenger is designed to be task-semantic—aimed at exposing logical vulnerabilities in reasoning—rather than focusing on safety bypass techniques like jailbreaks or prompt injection.
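
Since the cycle is the core of the method, a minimal sketch may help. Everything below is inferred from this summary alone: the `LLM` callable, the prompt wordings, and the function names are placeholder assumptions, not the paper's actual implementation.

```python
from typing import Callable

# Any text-in / text-out model call; plug in your own client here.
LLM = Callable[[str], str]

def cap_cot_cycle(llm: LLM, task: str,
                  solver_prompt: str, challenger_prompt: str):
    """Run one CAP-CoT cycle and return the two updated prompts."""
    # 1. Forward solver: generate a candidate chain-of-thought.
    candidate = llm(f"{solver_prompt}\n\nTask: {task}\nThink step by step.")

    # 2. Adversarial challenger: a plausible-but-flawed chain built with a
    #    targeted error strategy (task-semantic, not a jailbreak).
    flawed = llm(f"{challenger_prompt}\n\nTask: {task}\n"
                 f"Reference chain:\n{candidate}")

    # 3. Feedback agent: contrast the chains step by step and emit
    #    step-aligned structured feedback on where they diverge.
    feedback = llm("For each aligned step of chains A and B, state which "
                   "(if either) is logically sound, and why.\n"
                   f"A:\n{candidate}\n\nB:\n{flawed}")

    # 4. Close the loop in both directions: harden the solver prompt
    #    against the exposed errors, and sharpen the challenger prompt
    #    toward more targeted errors in the next cycle.
    solver_prompt = llm("Revise this solver prompt to guard against the "
                        "failure modes in the feedback.\n"
                        f"Prompt:\n{solver_prompt}\nFeedback:\n{feedback}")
    challenger_prompt = llm("Revise this challenger prompt so its errors "
                            "target the remaining weaknesses.\n"
                            f"Prompt:\n{challenger_prompt}\n"
                            f"Feedback:\n{feedback}")
    return solver_prompt, challenger_prompt

def optimize(llm: LLM, task: str, solver_prompt: str,
             challenger_prompt: str, cycles: int = 3) -> str:
    # The paper reports gains within about two to three cycles.
    for _ in range(cycles):
        solver_prompt, challenger_prompt = cap_cot_cycle(
            llm, task, solver_prompt, challenger_prompt)
    return solver_prompt
```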

Abstract

Chain-of-Thought (CoT) prompting has emerged as a simple and effective way to elicit step-by-step solutions from large language models (LLMs). However, CoT reasoning can be unstable across runs on long, multi-step problems, leading to inconsistent answers for an unchanged task. Most prior work focuses on improving the forward reasoning chain within a single pass, with less attention paid to iterative and contrastive correction. To address this gap, we propose CAP-CoT, a Cycle Adversarial Prompt optimization framework designed to improve both the CoT reasoning accuracy and the run-to-run stability of a single deployed solver. In each cycle, a forward solver generates candidate reasoning chains, an adversarial challenger constructs plausible but deliberately flawed chains using targeted error strategies, and a feedback agent contrasts the two chains and produces step-aligned structured feedback. This feedback closes the optimization loop in two directions: it updates the solver prompt based on errors exposed by the challenger, and it updates the challenger prompt to generate increasingly targeted errors in subsequent cycles. Unlike safety-oriented adversarial prompting such as jailbreak or prompt-injection attacks, our adversarial component is task-semantic and aims to expose logical vulnerabilities in reasoning chains. Experiments across six benchmarks and four LLM backbones demonstrate that within two to three adversarial prompt optimization cycles, CAP-CoT consistently reduces variability across runs while improving reasoning accuracy and robustness to prompt perturbations.
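
The abstract does not define its variability measure, but one simple proxy (an assumption on our part, not necessarily the paper's metric) is the agreement rate of final answers across repeated samples of the same task:

```python
from collections import Counter

def consistency_rate(answers: list[str]) -> float:
    """Fraction of runs whose final answer matches the majority answer."""
    counts = Counter(answers)
    _, majority_count = counts.most_common(1)[0]
    return majority_count / len(answers)

# Example: five runs of the solver on one unchanged task.
runs = ["42", "42", "41", "42", "42"]
print(consistency_rate(runs))  # 0.8 -- CAP-CoT aims to push this toward 1.0
```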