Hybrid JIT-CUDA Graph Optimization for Low-Latency Large Language Model Inference

arXiv cs.LG / April 28, 2026


Key Points

  • The paper proposes a hybrid inference runtime that combines Just-In-Time (JIT) compilation with CUDA Graph execution to lower GPU kernel launch overhead in low-latency LLM serving.
  • It splits transformer inference into static parts run via CUDA Graph replay and dynamic parts compiled on the fly with JIT kernels, preserving flexibility during autoregressive decoding (see the sketch after this list).
  • The framework supports asynchronous graph capture and reuse across decoding steps, aiming to reduce both average latency and its run-to-run variability.
  • Experiments on LLaMA-2 7B (single GPU, batch size 1) for 10–500 token prompts show up to a 66.0% reduction in Time-to-First-Token (TTFT) versus TensorRT-LLM, along with improved P99 latency.
  • The authors conclude this hybrid approach is particularly effective for short-sequence, interactive workloads where latency sensitivity is critical.
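
To make the static/dynamic split concrete, here is a minimal PyTorch sketch of the general pattern (an assumption for illustration: the paper does not tie its framework to PyTorch, and `model`, `decode_step`, and the buffer names are hypothetical). The fixed-shape decode forward pass is captured once as a CUDA Graph and replayed each step, while sampling stays outside the graph as a JIT-compiled function, so per-step parameters can change without recapturing.

```python
import torch

# Hypothetical stand-in for one fixed-shape decode step
# (4096 hidden dim / 32000 vocab, matching LLaMA-2 7B).
model = torch.nn.Linear(4096, 32000).cuda().eval()
static_input = torch.zeros(1, 4096, device="cuda")  # reused across all replays

# Warm up on a side stream so capture sees initialized kernels/allocator state.
side = torch.cuda.Stream()
side.wait_stream(torch.cuda.current_stream())
with torch.no_grad(), torch.cuda.stream(side):
    for _ in range(3):
        model(static_input)
torch.cuda.current_stream().wait_stream(side)

# Capture the static part once; later steps replay it without per-kernel launches.
graph = torch.cuda.CUDAGraph()
with torch.no_grad(), torch.cuda.graph(graph):
    static_logits = model(static_input)

# Dynamic part stays outside the graph and is JIT-compiled, so e.g. the
# temperature can change between steps without invalidating the capture.
@torch.compile
def sample(logits, temperature):
    probs = torch.softmax(logits / temperature, dim=-1)
    return torch.multinomial(probs, num_samples=1)

def decode_step(hidden, temperature=0.8):
    static_input.copy_(hidden)  # write into the captured input buffer
    graph.replay()              # one launch replays the whole static subgraph
    return sample(static_logits, temperature)
```

Replaying the captured graph issues the entire static subgraph with a single launch, which is where the kernel-launch savings the paper targets come from; only the dynamic tail pays per-step launch cost.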

Abstract

Large Language Models (LLMs) have achieved strong performance across natural language and multimodal tasks, yet their practical deployment remains constrained by inference latency and kernel launch overhead, particularly in interactive, short-sequence settings. This paper presents a hybrid runtime framework that combines Just-In-Time (JIT) compilation with CUDA Graph execution to reduce launch overhead while preserving runtime flexibility during autoregressive decoding. The framework partitions transformer inference into static components executed via CUDA Graph replay and dynamic components handled through JIT-compiled kernels, enabling asynchronous graph capture and reuse across decoding steps. We evaluate the proposed approach on LLaMA-2 7B using single-GPU, batch-size-one inference across prompt lengths from 10 to 500 tokens. Experimental results show that the hybrid runtime reduces Time-to-First-Token (TTFT) by up to 66.0% and achieves lower P99 latency compared with TensorRT-LLM in this regime. These results indicate that hybrid JIT-CUDA Graph execution can effectively reduce inference latency and variance for short-sequence LLM workloads, making it a practical optimization strategy for latency-sensitive AI applications.
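
For context on the reported metrics: TTFT is the wall-clock time from request submission to the first emitted token, and P99 latency is the 99th percentile over repeated measurements. Below is a minimal sketch of how such numbers are typically collected with CUDA events, assuming a hypothetical `prefill_and_first_token` callable; this is illustrative tooling, not the paper's benchmark harness.

```python
import torch

def measure_ttft(prefill_and_first_token, n_trials=100):
    """Return (mean, P99) TTFT in milliseconds over repeated trials.

    `prefill_and_first_token` is an assumed callable that runs the prompt
    prefill and emits the first output token on the GPU.
    """
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    times_ms = []
    for _ in range(n_trials):
        start.record()
        prefill_and_first_token()
        end.record()
        torch.cuda.synchronize()  # ensure both events have completed
        times_ms.append(start.elapsed_time(end))
    times_ms.sort()
    p99 = times_ms[int(0.99 * (len(times_ms) - 1))]  # nearest-rank P99
    return sum(times_ms) / len(times_ms), p99
```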