SUPERNOVA: Eliciting General Reasoning in LLMs with Reinforcement Learning on Natural Instructions
arXiv cs.CL / 4/13/2026
Key Points
- The paper proposes SUPERNOVA, a data curation framework that extends Reinforcement Learning with Verifiable Rewards (RLVR) from formal reasoning (math/code) to more general reasoning involving causal and temporal understanding.
- It argues that the main bottleneck for general RLVR is scarce high-quality, verifiable training data, and introduces an approach to adapt expert-annotated instruction-tuning datasets into RLVR-ready training signals.
- Across 100+ controlled reinforcement learning experiments, the authors analyze how data design choices—source task selection, task mixing strategies, and synthetic interventions—affect downstream reasoning performance.
- Results show that source task selection is crucial: choosing source tasks by their measured performance on the specific target task outperforms selection based on overall average performance across tasks.
- Models trained with SUPERNOVA outperform strong baselines (e.g., Qwen3.5) on benchmarks such as BBEH, Zebralogic, and MMLU-Pro, achieving up to 52.8% relative improvement on BBEH, and the code/data are released on GitHub.
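The two data-design ideas above — binary verifiable rewards and target-aware source task selection — can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the function names, score table, and exact-match reward are assumptions for demonstration.

```python
# Hypothetical sketch of SUPERNOVA-style data curation ideas.
# Names and data below are illustrative assumptions, not the paper's API.

def verifiable_reward(model_answer: str, reference: str) -> float:
    """Binary RLVR-style reward: 1.0 on a normalized exact match, else 0.0.
    (One simple way to turn an expert-annotated answer into a verifiable signal.)"""
    normalize = lambda s: " ".join(s.lower().split())
    return 1.0 if normalize(model_answer) == normalize(reference) else 0.0

def select_source_tasks(task_scores: dict, target: str, k: int = 2) -> list:
    """Rank candidate source tasks by validation score on the *specific*
    target benchmark (per the paper's finding), not by average score."""
    ranked = sorted(task_scores, key=lambda t: task_scores[t][target], reverse=True)
    return ranked[:k]

# Illustrative per-task validation scores on two target benchmarks.
scores = {
    "causal_qa":  {"BBEH": 0.41, "ZebraLogic": 0.22},
    "date_arith": {"BBEH": 0.35, "ZebraLogic": 0.30},
    "deduction":  {"BBEH": 0.28, "ZebraLogic": 0.44},
}

print(select_source_tasks(scores, "BBEH"))        # ['causal_qa', 'date_arith']
print(select_source_tasks(scores, "ZebraLogic"))  # ['deduction', 'date_arith']
```

Note how the top-ranked source tasks differ by target benchmark, which is why selecting on the specific target beats selecting on the overall average.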