
MoLoRA: Composable Specialization via Per-Token Adapter Routing

arXiv cs.CL / 3/18/2026


Key Points

  • The paper argues that traditional multi-adapter systems, which route entire sequences to a single adapter, fail for multimodal and mixed-capability tasks, and proposes per-token routing that assigns each token to a domain-specific adapter.
  • It introduces MoLoRA (Mixture of LoRA), a framework that loads multiple domain-specific adapters and uses a learned router to select the appropriate adapter for each token.
  • Per-token routing is argued to be provably work-optimal: N adapter applications for N tokens, versus K·N for per-sequence routing with K adapters. Empirically, it lets smaller models outperform larger ones on reasoning benchmarks: with MoLoRA, Qwen3-1.7B beats Qwen3-8B across four tasks while being 4.7x smaller.
  • The approach enables modular, inference-time specialization: train focused LoRAs independently, compose them without retraining, and add new capabilities simply by loading new adapters.
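The learned-gating mechanism described above can be sketched in a few lines: a frozen base projection, K independently trained LoRA pairs, and a router that picks one adapter index per token, so each token incurs exactly one low-rank update. This is a minimal illustration, not the paper's implementation; the dimensions, the linear router, and the argmax selection are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, rank, n_adapters, n_tokens = 16, 4, 3, 5

# Frozen base weight and K independently trained LoRA pairs (A_k, B_k).
W = rng.normal(size=(d_model, d_model))
loras = [(rng.normal(size=(d_model, rank)) * 0.1,
          rng.normal(size=(rank, d_model)) * 0.1)
         for _ in range(n_adapters)]

# Hypothetical learned router: a linear layer producing per-token adapter logits.
router_W = rng.normal(size=(d_model, n_adapters))

def molora_forward(x):
    """x: (n_tokens, d_model). Routes each token to exactly one adapter."""
    base = x @ W                              # shared base projection
    choice = (x @ router_W).argmax(axis=-1)   # per-token adapter index
    out = base.copy()
    for k, (A, B) in enumerate(loras):
        mask = choice == k                    # tokens routed to adapter k
        out[mask] += x[mask] @ A @ B          # low-rank LoRA update
    return out, choice

x = rng.normal(size=(n_tokens, d_model))
y, choice = molora_forward(x)
print(choice)  # one adapter index per token
```

Note that total adapter work is N low-rank updates for N tokens, since each token hits only its selected adapter; adding a capability amounts to appending another (A, B) pair to `loras`.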

Abstract

Multi-adapter serving systems route entire sequences to a single adapter, forcing a choice when requests span multiple domains. This assumption fails in two important settings: (1) multimodal generation, where text and image tokens require different adapters within the same sequence, and (2) mixed-capability requests like "write code to solve this equation," which need expertise from multiple specialized adapters. We introduce per-token routing, which routes individual tokens to adapters based on either vocabulary structure (for multimodal models) or learned gating (for semantic specialization). Per-token routing is provably optimal, achieving work N for N tokens versus K·N for per-sequence routing with K adapter types. Our key contribution is MoLoRA (Mixture of LoRA), which enables composable specialization: load multiple domain-specific adapters and let a learned router select the appropriate adapter per token. We demonstrate that specialization dramatically beats scale: MoLoRA enables Qwen3-1.7B to exceed Qwen3-8B across four reasoning benchmarks while being 4.7x smaller. This enables modular expertise at inference time: train focused LoRAs independently, combine them without retraining, and add new capabilities by simply loading new adapters.
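The abstract's first routing mode, vocabulary-structure routing, needs no learned gate at all: in many multimodal vocabularies, image codebook tokens occupy id ranges above the text vocabulary, so the adapter choice follows directly from the token id. A minimal sketch, assuming a hypothetical boundary of 32000 between text and image ids:

```python
# Hypothetical boundary: ids below it are text tokens, at or above it are
# assumed to be image codebook tokens. The value 32000 is an assumption
# for illustration, not taken from the paper.
TEXT_VOCAB_SIZE = 32000

def route_by_vocab(token_ids):
    """Pick an adapter per token purely from vocabulary structure."""
    return ["text" if t < TEXT_VOCAB_SIZE else "image" for t in token_ids]

mixed = [101, 42, 33005, 32999, 7]
print(route_by_vocab(mixed))  # → ['text', 'text', 'image', 'image', 'text']
```

Because the decision is a pure function of the token id, this mode adds essentially no routing overhead at inference time, which is what makes the N-versus-K·N work comparison favorable for per-token routing.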