ContraPrompt: Contrastive Prompt Optimization via Dyadic Reasoning Trace Analysis
arXiv cs.AI · April 21, 2026
Key Points
- ContraPrompt is a new prompt optimization approach that extracts optimization signals through dyadic reasoning trace analysis: it compares a model’s failed reasoning trace with its subsequent feedback-guided successful retry on the same input.
- Instead of contrasting prompts or analyzing single execution failures in isolation, it compares complete intermediate reasoning processes; because the model, input, and base prompt are held fixed across the pair, any remaining differences between the two traces reflect reasoning strategy and the appended error feedback.
- The method uses an instrumented, multi-attempt agentic retry loop to automatically generate contrastive training data without human annotation, then organizes extracted rules into an input-aware decision tree for routing.
- On four reasoning and compliance benchmarks, ContraPrompt outperforms GEPA across all tasks, with reported absolute gains including +8.29 pp on HotPotQA and +2.21 pp on GDPR-Bench, and ablations show that removing dyadic trace contrastivity causes a large performance drop.
- On additional black-box optimization and FiNER-139 NER tasks, it achieves broader gains (beating GEPA on 11 of 53 problems under equal budget) and improves compliance-aligned financial NER by +7.77 pp over an unoptimized baseline and +1.94 pp over GEPA.
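The retry-loop data collection described above can be sketched in a few lines. This is a hedged illustration, not the paper's implementation: all function names (`collect_contrastive_pairs`, `solve`, `check`, `feedback_fn`) and record fields are assumptions, and the toy "model" below is a deterministic stand-in used only to show the control flow that pairs a failed trace with its later successful retry.

```python
# Hypothetical sketch of ContraPrompt-style contrastive pair collection.
# All names and structures are illustrative assumptions, not from the paper.

def collect_contrastive_pairs(tasks, solve, check, feedback_fn, max_attempts=3):
    """Retry each task with appended error feedback; when a later attempt
    succeeds after an earlier failure, emit a dyadic (failure, success) pair."""
    pairs = []
    for task in tasks:
        feedback = None
        failed_trace = None  # first failed reasoning trace, if any
        for _ in range(max_attempts):
            answer, trace = solve(task, feedback)
            if check(task, answer):
                if failed_trace is not None:
                    # Model, input, and base prompt are shared, so the
                    # difference between the two traces isolates the change
                    # in reasoning strategy plus the error feedback.
                    pairs.append({
                        "input": task,
                        "failure_trace": failed_trace,
                        "success_trace": trace,
                        "feedback": feedback,
                    })
                break
            if failed_trace is None:
                failed_trace = trace
            feedback = feedback_fn(task, answer, trace)
    return pairs


# Toy demonstration with a deterministic stand-in "model":
def toy_solve(task, feedback):
    if feedback is None:
        return 0, "guessed zero without checking the input"
    return task, "used the feedback to re-read the answer from the input"

def toy_check(task, answer):
    return answer == task

def toy_feedback(task, answer, trace):
    return f"answer {answer} was wrong; re-read the input"

pairs = collect_contrastive_pairs([1, 2], toy_solve, toy_check, toy_feedback)
print(len(pairs))  # → 2 (each task fails once, then succeeds with feedback)
```

In this sketch, no human annotation is needed: the success criterion (`check`) and the feedback generator drive the loop, matching the paper's claim of automatically generated contrastive training data.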