Dual Consensus: Escaping from Spurious Majority in Unsupervised RLVR via Two-Stage Vote Mechanism
arXiv cs.LG / 3/18/2026
Key Points
- Dual Consensus Reinforcement Learning (DCRL) is proposed as a self-supervised training method that mitigates convergence to spurious majorities in unsupervised RLVR (reinforcement learning with verifiable rewards) for large language models.
- It introduces a two-stage vote mechanism where the model first acts as an anchor to produce dominant responses and then as an explorer to generate diverse auxiliary signals via a temporary unlearning process.
- The final training target is the harmonic mean of the anchor and explorer signals (see the sketch after this list), and the approach operates without external models or supervision.
- Across eight benchmarks, DCRL improves Pass@1 over majority-vote baselines and yields more stable training dynamics, indicating a scalable path to stronger reasoning without labeled data.
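
To make the two-stage vote concrete, here is a minimal Python sketch, assuming the anchor stage and the explorer stage each cast votes over candidate final answers and that the harmonic mean of their vote fractions serves as the reward signal. This is an illustration of the general idea, not the paper's implementation; the function and variable names (`dual_consensus_reward`, `anchor_answers`, `explorer_answers`) are hypothetical.

```python
from collections import Counter

def dual_consensus_reward(anchor_answers, explorer_answers):
    """Illustrative sketch: score each candidate answer by the harmonic mean
    of its vote share under the anchor stage and the explorer stage.
    (Names and structure are assumptions, not the paper's code.)"""
    anchor_counts = Counter(anchor_answers)
    explorer_counts = Counter(explorer_answers)

    rewards = {}
    for ans in set(anchor_answers) | set(explorer_answers):
        # Fraction of votes each stage gives to this answer.
        p_anchor = anchor_counts.get(ans, 0) / max(len(anchor_answers), 1)
        p_explorer = explorer_counts.get(ans, 0) / max(len(explorer_answers), 1)
        # The harmonic mean is high only when BOTH stages endorse the answer,
        # so a spurious majority backed by the anchor alone earns low reward.
        if p_anchor > 0 and p_explorer > 0:
            rewards[ans] = 2 * p_anchor * p_explorer / (p_anchor + p_explorer)
        else:
            rewards[ans] = 0.0
    return rewards

# Example: the anchor strongly favors "42" while the explorer spreads its
# votes more widely; only answers supported by both stages score well.
anchor = ["42", "42", "42", "17"]
explorer = ["42", "17", "17", "23"]
print(dual_consensus_reward(anchor, explorer))
```

In this toy example, "42" keeps a nonzero reward because both stages vote for it, while an answer appearing only in one stage scores zero; this captures why the harmonic mean, unlike a plain majority vote, penalizes consensus that the explorer's diverse signals do not corroborate.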