MoLoRA: Composable Specialization via Per-Token Adapter Routing
arXiv cs.CL, March 18, 2026
Key Points
- The paper argues that traditional multi-adapter systems, which route an entire sequence to a single adapter, fail on multimodal and mixed-capability tasks, and proposes per-token routing that assigns each token to a domain-specific adapter.
- It introduces MoLoRA (Mixture of LoRA), a framework that loads multiple domain-specific adapters and uses a learned router to select the appropriate adapter for each token.
- Per-token routing is shown to be provably optimal: it performs work proportional to N for N tokens, versus K·N for per-sequence routing with K adapters. Empirically, it lets smaller models outperform larger ones on reasoning benchmarks: Qwen3-1.7B beats Qwen3-8B across four tasks while being 4.7x smaller.
- The approach enables modular, inference-time specialization: train focused LoRAs independently, compose them without retraining, and add new capabilities simply by loading new adapters.
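The routing scheme described above can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: it assumes a frozen base weight `W0`, K independent LoRA pairs `(A_k, B_k)`, and a learned linear router that scores each token and dispatches it (via argmax) to exactly one adapter. All names, shapes, and the argmax dispatch rule are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, rank, n_adapters, n_tokens = 16, 4, 3, 5

# Frozen base weight, shared by every token regardless of routing
W0 = rng.standard_normal((d_model, d_model)) * 0.02

# K independently trained LoRA adapters, each a low-rank pair (A_k, B_k)
A = rng.standard_normal((n_adapters, rank, d_model)) * 0.02
B = rng.standard_normal((n_adapters, d_model, rank)) * 0.02

# Learned router: one linear head producing a score per adapter
W_router = rng.standard_normal((d_model, n_adapters)) * 0.02

def molora_forward(x):
    """Per-token routing: each token is sent to the adapter with the
    highest router score, so total adapter work scales with N, not K*N."""
    choices = np.argmax(x @ W_router, axis=-1)  # (n_tokens,) adapter index
    out = x @ W0                                # frozen base path for all tokens
    for t, k in enumerate(choices):
        # Apply only the chosen adapter's low-rank update to token t
        out[t] += x[t] @ A[k].T @ B[k].T
    return out, choices

x = rng.standard_normal((n_tokens, d_model))
y, routed = molora_forward(x)
print(y.shape, routed)  # each entry of `routed` is one token's adapter id
```

Composability then amounts to stacking a new `(A_k, B_k)` pair onto the adapter arrays and widening the router head: no base weights or existing adapters need retraining.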
