AI Navigate

Rethinking Multiple-Choice Questions for RLVR: Unlocking Potential via Distractor Design

arXiv cs.CL / 3/16/2026


Key Points

  • The paper investigates how option design in RLVR-based MCQs affects model reasoning and vulnerability to reward hacking.
  • It shows that a mismatch between the number of options used in training and in testing degrades performance, and that strong distractors enable effective RLVR even with 2-way questions.
  • A new framework, Iterative Distractor Curation (IDC), actively constructs high-quality distractors to block elimination shortcuts and promote deeper reasoning.
  • Experimental results across benchmarks demonstrate that IDC improves distractor quality and yields significant gains in RLVR training over the original data.
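The reward-hacking concern in the first two points is easy to make concrete: with a binary 0/1 verifiable reward on an MCQ, a model that guesses uniformly still collects an expected reward of 1/K for a K-way question. The sketch below (illustrative only; the reward shape and function names are assumptions, not the paper's code) estimates that baseline by simulation, showing why fewer options make shortcutting more attractive unless the distractors are strong.

```python
import random

def mcq_reward(chosen: int, correct: int) -> float:
    """Verifiable MCQ reward: 1 if the chosen option matches the key, else 0."""
    return 1.0 if chosen == correct else 0.0

def expected_guess_reward(num_options: int, trials: int = 100_000) -> float:
    """Monte Carlo estimate of the reward a pure guesser collects on a
    num_options-way question; should converge to 1 / num_options."""
    rng = random.Random(0)
    hits = sum(
        mcq_reward(rng.randrange(num_options), 0) for _ in range(trials)
    )
    return hits / trials

for k in (2, 4, 8):
    print(f"{k}-way MCQ: guessing reward ~ {expected_guess_reward(k):.3f}")
```

A 2-way question hands a guesser an expected reward of 0.5, which is exactly why the paper's finding that strong distractors make 2-way RLVR viable is non-trivial: the distractor, not the option count, has to carry the discriminative signal.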

Abstract

Reinforcement Learning with Verifiable Rewards (RLVR) significantly enhances the reasoning capabilities of Large Language Models. When applied to RLVR, Multiple-Choice Questions (MCQs) offer a scalable source of verifiable data but risk inducing reward hacking, where models shortcut reasoning via random guessing or simple elimination. Current approaches often mitigate this by converting MCQs to open-ended formats, thereby discarding the contrastive signal provided by expert-designed distractors. In this work, we systematically investigate the impact of option design on RLVR. Our analysis highlights two primary insights: (1) Mismatches in option counts between training and testing degrade performance. (2) Strong distractors effectively mitigate random guessing, enabling effective RLVR training even with 2-way questions. Motivated by these findings, we propose Iterative Distractor Curation (IDC), a framework that actively constructs high-quality distractors to block elimination shortcuts and promote deep reasoning. Experiments on various benchmarks demonstrate that our method effectively enhances distractor quality and yields significant gains in RLVR training compared to the original data.
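The "actively constructs high-quality distractors" idea can be pictured as a filter-and-relax loop: generate candidate distractors, score how hard each is to eliminate without actually solving the question, and keep only those that clear a bar. The sketch below is a toy version of that loop under stated assumptions; the function names, the word-overlap plausibility scorer, and the relaxation schedule are all illustrative stand-ins, not the paper's actual IDC algorithm.

```python
from typing import Callable

def curate_distractors(
    question: str,
    answer: str,
    candidates: list[str],
    plausibility: Callable[[str, str], float],
    n_keep: int = 3,
    threshold: float = 0.5,
    max_rounds: int = 3,
) -> list[str]:
    """Toy iterative curation loop: keep candidates whose plausibility
    score clears a threshold, relaxing the bar if too few survive."""
    kept: list[str] = []
    for _ in range(max_rounds):
        kept = [
            c for c in candidates
            if c != answer and plausibility(question, c) >= threshold
        ]
        if len(kept) >= n_keep:
            break
        threshold *= 0.8  # relax the bar and retry
    # Return the hardest-to-eliminate distractors first.
    return sorted(kept, key=lambda c: -plausibility(question, c))[:n_keep]

def word_overlap(question: str, option: str) -> float:
    """Crude proxy scorer: options sharing vocabulary with the question
    look superficially plausible and resist naive elimination."""
    q = set(question.lower().split())
    o = set(option.lower().split())
    return len(q & o) / max(len(o), 1)

q = "Which gas do plants absorb during photosynthesis"
picked = curate_distractors(
    q,
    "carbon dioxide",
    ["oxygen gas plants release", "helium", "nitrogen gas", "water vapor"],
    word_overlap,
)
print(picked)  # 'helium' is filtered out as trivially eliminable
```

In a real pipeline the scorer would presumably be a model-based check (e.g. whether a weak solver can reject the option without reasoning about the question), but the control flow, iterating until enough shortcut-resistant distractors survive, is the part this sketch is meant to convey.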