MobileLLM-Flash: Latency-Guided On-Device LLM Design for Industry Scale
arXiv cs.LG / 3/18/2026
📰 News · Developer Stack & Infrastructure · Tools & Practical Usage · Models & Research
Key Points
- MobileLLM-Flash introduces latency-guided hardware-in-the-loop architecture search to design on-device LLMs optimized for mobile latency, broad hardware compatibility, and industry-scale deployment without custom kernels.
- It yields a family of foundation models (350M, 650M, 1.4B) that support up to 8k context and achieve up to 1.8x prefill and 1.6x decode speedups on mobile CPUs with comparable or superior quality.
- The approach uses a staged evaluation: first training an accurate latency model, then performing Pareto-frontier search across latency and quality, while treating candidates as pruned versions of pretrained backbones with inherited weights to minimize retraining.
- It avoids specialized attention mechanisms by employing attention skipping for long-context acceleration, and it ensures deployment compatibility with standard mobile runtimes such as ExecuTorch.
- The work provides actionable principles for on-device LLM (OD-LLM) design and is positioned for industry-scale deployment of on-device models.
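The Pareto-frontier search mentioned in the key points can be illustrated with a minimal sketch: given candidate architectures scored by a latency model (lower is better) and a quality estimate (higher is better), keep only candidates not dominated on both axes. The `Candidate` class, field names, and example values below are hypothetical, not taken from the paper.

```python
# Hypothetical sketch of Pareto-frontier selection over architecture candidates.
from dataclasses import dataclass

@dataclass(frozen=True)
class Candidate:
    name: str
    latency_ms: float  # assumed to come from a trained latency model
    quality: float     # assumed averaged benchmark score

def pareto_frontier(cands):
    """Keep candidates for which no other candidate is at least as fast
    AND at least as accurate, with strict improvement on one axis."""
    frontier = []
    for c in cands:
        dominated = any(
            o.latency_ms <= c.latency_ms and o.quality >= c.quality
            and (o.latency_ms < c.latency_ms or o.quality > c.quality)
            for o in cands
        )
        if not dominated:
            frontier.append(c)
    return frontier

candidates = [
    Candidate("A", 12.0, 0.61),
    Candidate("B", 15.0, 0.60),  # dominated by A: slower and lower quality
    Candidate("C", 9.0, 0.55),
    Candidate("D", 20.0, 0.66),
]
print([c.name for c in pareto_frontier(candidates)])  # → ['A', 'C', 'D']
```

In the paper's staged setup, each surviving frontier point would then be realized as a pruned sub-network of a pretrained backbone with inherited weights, so only frontier candidates pay the (reduced) training cost.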