DRTriton: Large-Scale Synthetic Data Reinforcement Learning for Triton Kernel Generation

arXiv cs.CL / 3/24/2026


Key Points

  • The paper introduces DRTriton, a framework that trains LLMs to translate PyTorch reference code into optimized Triton kernels compiled to CUDA at runtime, targeting a key pain point in generative AI engineering: efficient CUDA kernel creation.
  • DRTriton uses a synthetic data strategy (CSP-DAG) designed to ensure broad coverage of the operator space with unbiased uniform sampling and controlled task difficulty.
  • It applies curriculum reinforcement learning with decoupled rewards to jointly improve conversion success rate and inference speed, then adds a test-time search method to further boost runtime performance.
  • Although trained only on synthetic data, DRTriton is reported to generalize well to difficult real-world CUDA kernels, including cases challenging for expert engineers.
  • In experiments, DRTriton-7B delivers speedups on 92% of KernelBench Level 2 tasks, substantially outperforming GPT-5.2 (23%) and Claude-Sonnet-4.5 (19%).
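To make the CSP-DAG idea concrete, here is a minimal sketch of what uniform, coverage-guaranteed sampling over an operator DAG could look like. The operator pool, the connectivity rule, and the `sample_operator_dag` helper are illustrative assumptions for this summary, not the paper's actual algorithm.

```python
import random

# Hypothetical operator pool; the paper's real operator space is not
# specified here, so this set is an illustrative assumption.
OPERATORS = ["matmul", "relu", "softmax", "layernorm", "add", "mul"]

def sample_operator_dag(num_nodes, seed=None):
    """Sample a random operator DAG: each node draws its operator uniformly
    from the pool (unbiased coverage), and every node's inputs come only
    from strictly earlier nodes, which guarantees acyclicity."""
    rng = random.Random(seed)
    nodes = []
    for i in range(num_nodes):
        op = rng.choice(OPERATORS)  # uniform sampling over operators
        # Up to two inputs, all with smaller ids -> the graph is a DAG.
        parents = rng.sample(range(i), k=min(i, rng.randint(0, 2)))
        nodes.append({"id": i, "op": op, "inputs": sorted(parents)})
    return nodes

# Task difficulty can be controlled by num_nodes: larger graphs mean
# longer fused kernels and harder conversion tasks.
dag = sample_operator_dag(5, seed=0)
```

Each sampled DAG would then be rendered as a PyTorch reference program, giving the model a training task whose difficulty is set by the graph size.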

Abstract

Developing efficient CUDA kernels is a fundamental yet challenging task in the generative AI industry. Recent research leverages Large Language Models (LLMs) to automatically convert PyTorch reference implementations into CUDA kernels, significantly reducing engineering effort, yet state-of-the-art LLMs such as GPT-5.2 and Claude-Sonnet-4.5 still struggle at this task. To address this challenge, we propose DRTriton, a scalable learning framework for training LLMs to convert PyTorch code into highly optimized Triton kernels, which are then compiled to CUDA kernels at runtime. DRTriton consists of three key components: (i) a data synthesis algorithm, CSP-DAG, that guarantees full coverage and unbiased uniform sampling over the operator space with controlled difficulty; (ii) a curriculum reinforcement learning scheme with decoupled rewards that efficiently optimizes conversion success rate and inference speed simultaneously; and (iii) a test-time search algorithm that further improves the inference speed of the generated Triton kernels. Notably, despite being trained exclusively on synthetic data, DRTriton generalizes effectively to real-world CUDA kernels that are challenging even for human experts. Experimental results show that DRTriton-7B achieves speedups on 92% of KernelBench Level 2 tasks, compared to 23% for GPT-5.2 and 19% for Claude-Sonnet-4.5.
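The "decoupled rewards" of component (ii) can be illustrated with a small sketch: a correctness term gates any credit at all, and a separate speed term rewards runtime gains over the PyTorch reference. The exact weighting, clipping, and signal names below are assumptions for illustration, not the paper's formula.

```python
def decoupled_reward(compiled_ok, outputs_match, ref_time_ms, kernel_time_ms):
    """Illustrative decoupled reward for kernel-generation RL.

    correctness term: 1.0 only if the Triton kernel compiles and its outputs
    match the PyTorch reference; otherwise the total reward is 0, so the
    model cannot trade correctness for speed.
    speed term: speedup over the reference, shifted and clipped to [0, 1],
    added on top of the correctness term for valid kernels only.
    """
    correctness = 1.0 if (compiled_ok and outputs_match) else 0.0
    if correctness == 0.0:
        return 0.0  # no speed credit for broken or wrong kernels
    speedup = ref_time_ms / kernel_time_ms
    speed_reward = max(0.0, min(1.0, speedup - 1.0))  # clipped bonus
    return correctness + speed_reward
```

Keeping the two terms separate lets a curriculum first drive the conversion success rate up (the correctness term), then shift optimization pressure onto runtime (the speed term), matching the two objectives the abstract says are optimized simultaneously.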