
AdaFuse: Accelerating Dynamic Adapter Inference via Token-Level Pre-Gating and Fused Kernel Optimization

arXiv cs.AI / 3/13/2026

📰 News · Developer Stack & Infrastructure · Models & Research

Key Points

  • AdaFuse targets the latency bottleneck of dynamic adapters, showing that the overhead stems from fragmented CUDA kernel launches rather than the core computations.
  • It introduces a token-level pre-gating strategy that makes a single global routing decision covering all adapter layers, effectively fixing the execution path for each token (see the sketch after this list).
  • The fixed path enables a fused CUDA kernel that merges all selected LoRA adapters into the backbone model in one efficient pass.
  • Experimental results on popular open-source LLMs show accuracy comparable to state-of-the-art dynamic adapters while cutting decoding latency by a factor of more than 2.4x.
  • The work demonstrates a hardware–software co-design approach to improve inference efficiency without sacrificing model capability.
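
For intuition, here is a minimal PyTorch-style sketch of the token-level pre-gating idea described above. The class and parameter names (`PreGatedRouter`, `num_adapters`, `top_k`) are illustrative assumptions, not the paper's actual API: the gate scores each token once against the pool of adapters and returns a single top-k selection that every adapter layer then reuses, instead of re-routing at each layer.

```python
import torch
import torch.nn as nn

class PreGatedRouter(nn.Module):
    """Illustrative token-level pre-gating: one global routing decision per token.

    Hypothetical sketch; names and shapes are assumptions, not the paper's API.
    """

    def __init__(self, hidden_dim: int, num_adapters: int, top_k: int = 2):
        super().__init__()
        self.gate = nn.Linear(hidden_dim, num_adapters, bias=False)
        self.top_k = top_k

    def forward(self, token_hidden: torch.Tensor) -> torch.Tensor:
        # token_hidden: [batch, hidden_dim], taken once before any adapter layer.
        scores = self.gate(token_hidden)                        # [batch, num_adapters]
        topk = torch.topk(scores, self.top_k, dim=-1).indices   # [batch, top_k]
        # These indices are fixed for the token's entire forward pass:
        # every adapter layer reuses the same selection ("decide once, apply everywhere"),
        # so the execution path becomes static and can be optimized as a whole.
        return topk

# Example usage (hypothetical dimensions):
router = PreGatedRouter(hidden_dim=4096, num_adapters=8, top_k=2)
selected = router(torch.randn(4, 4096))   # same indices reused by all adapter layers
```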

Abstract

The integration of dynamic, sparse structures like Mixture-of-Experts (MoE) with parameter-efficient adapters (e.g., LoRA) is a powerful technique for enhancing Large Language Models (LLMs). However, this architectural enhancement comes at a steep cost: despite minimal increases in computational load, inference latency often skyrockets, with decoding slowing by more than 2.5x. Through a fine-grained performance analysis, we pinpoint the primary bottleneck not in the computation itself, but in the severe overhead from the fragmented, sequential CUDA kernel launches required for conventional dynamic routing. To address this challenge, we introduce AdaFuse, a framework built on a tight co-design between the algorithm and the underlying hardware system to enable efficient dynamic adapter execution. Departing from conventional layer-wise or block-wise routing, AdaFuse employs a token-level pre-gating strategy, which makes a single, global routing decision for all adapter layers before a token is processed. This "decide-once, apply-everywhere" approach effectively makes the execution path static for each token, creating an opportunity for holistic optimization. We capitalize on this by developing a custom CUDA kernel that performs a fused switching operation, merging the parameters of all selected LoRA adapters into the backbone model in a single, efficient pass. Experimental results on popular open-source LLMs show that AdaFuse achieves accuracy on par with state-of-the-art dynamic adapters while cutting decoding latency by a factor of more than 2.4x, bridging the gap between model capability and inference efficiency.
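
The fused switching operation itself is implemented in the paper as a custom CUDA kernel; the PyTorch-level sketch below only illustrates its mathematical effect under assumed shapes and names. Because the pre-gate fixes the adapter selection for a token, the selected LoRA deltas (B_i A_i) can be folded into the backbone weight in one pass, so the token's forward computation runs as a single GEMM rather than a chain of small, per-adapter kernel launches.

```python
import torch

def merge_selected_loras(base_weight: torch.Tensor,
                         lora_As: torch.Tensor,
                         lora_Bs: torch.Tensor,
                         selected: torch.Tensor,
                         scaling: float = 1.0) -> torch.Tensor:
    """Sketch of the effect of the fused switching step: fold the chosen LoRA
    deltas into the backbone weight in one pass.

    base_weight: [out_dim, in_dim]            frozen backbone weight W
    lora_As:     [num_adapters, r, in_dim]    LoRA A matrices
    lora_Bs:     [num_adapters, out_dim, r]   LoRA B matrices
    selected:    [top_k] adapter indices chosen once by the token-level pre-gate

    The actual system performs this inside a custom CUDA kernel to avoid
    per-layer, per-adapter launch overhead; this version is an assumed-shape
    illustration, not the paper's implementation.
    """
    # Delta W = scaling * sum over selected adapters of B_i @ A_i
    delta = torch.einsum('kor,kri->oi', lora_Bs[selected], lora_As[selected])
    return base_weight + scaling * delta

# Example usage (hypothetical dimensions):
W = torch.randn(4096, 4096)
A = torch.randn(8, 16, 4096)   # 8 adapters, rank 16
B = torch.randn(8, 4096, 16)
W_merged = merge_selected_loras(W, A, B, selected=torch.tensor([1, 5]))
# One GEMM with W_merged replaces many small adapter kernels per layer.
```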