AI Navigate

MobileLLM-Flash: Latency-Guided On-Device LLM Design for Industry Scale

arXiv cs.LG · March 18, 2026

📰 News · Developer Stack & Infrastructure · Tools & Practical Usage · Models & Research

Key Points

  • MobileLLM-Flash introduces latency-guided hardware-in-the-loop architecture search to design on-device LLMs optimized for mobile latency, broad hardware compatibility, and industry-scale deployment without custom kernels.
  • It yields a family of foundation models (350M, 650M, 1.4B) that support up to 8k context and achieve up to 1.8x prefill and 1.6x decode speedups on mobile CPUs with comparable or superior quality.
  • The approach uses a staged evaluation: first training an accurate latency model, then performing Pareto-frontier search across latency and quality, while treating candidates as pruned versions of pretrained backbones with inherited weights to minimize retraining.
  • It avoids specialized attention mechanisms by employing attention skipping for long-context acceleration and ensures deployment compatibility with standard mobile runtimes like ExecuTorch.
  • The work provides actionable principles for OD-LLM design and is positioned for industry-scale deployment of on-device models.
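The staged evaluation in the third bullet can be sketched in miniature. Below is a hedged illustration, not the paper's implementation: a stand-in linear latency predictor plays the role of the learned latency model, and candidates are then filtered to the latency/quality Pareto frontier. The feature set, coefficients, and candidate configurations are illustrative assumptions.

```python
# Sketch of the staged search: (1) a cheap latency predictor stands in for
# the learned latency model, (2) candidates are filtered to the Pareto
# frontier across predicted latency and quality.

def predicted_latency_ms(layers: int, hidden_dim: int) -> float:
    # Stand-in linear latency model; in the paper's setting the model would
    # be fit from on-device measurements (hardware-in-the-loop). The
    # coefficients here are made up for illustration.
    return 0.9 * layers + 0.004 * hidden_dim

def pareto_frontier(candidates: list[dict]) -> list[dict]:
    """Keep candidates not dominated in (lower latency, higher quality)."""
    frontier = []
    for c in candidates:
        dominated = any(
            o["latency"] <= c["latency"]
            and o["quality"] >= c["quality"]
            and (o["latency"] < c["latency"] or o["quality"] > c["quality"])
            for o in candidates
        )
        if not dominated:
            frontier.append(c)
    return frontier

# Hypothetical candidate architectures with assumed quality scores.
candidates = [
    {"name": "A", "latency": predicted_latency_ms(24, 1024), "quality": 0.62},
    {"name": "B", "latency": predicted_latency_ms(30, 1280), "quality": 0.66},
    {"name": "C", "latency": predicted_latency_ms(30, 1024), "quality": 0.61},
]
print([c["name"] for c in pareto_frontier(candidates)])  # C is dominated by A
```

Because the latency predictor is cheap to query, the expensive step (quality evaluation via continued pretraining of pruned candidates) only needs to run on architectures near the frontier.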

Abstract

Real-time AI experiences call for on-device large language models (OD-LLMs) optimized for efficient deployment on resource-constrained hardware. The most useful OD-LLMs produce near-real-time responses and exhibit broad hardware compatibility, maximizing user reach. We present a methodology for designing such models using hardware-in-the-loop architecture search under mobile latency constraints. This system is amenable to industry-scale deployment: it generates models deployable without custom kernels and compatible with standard mobile runtimes like ExecuTorch. Our methodology avoids specialized attention mechanisms and instead uses attention skipping for long-context acceleration. Our approach jointly optimizes model architecture (layers, dimensions) and attention pattern. To efficiently evaluate candidates, we treat each as a pruned version of a pretrained backbone with inherited weights, thereby achieving high accuracy with minimal continued pretraining. We leverage the low cost of latency evaluation in a staged process: learning an accurate latency model first, then searching for the Pareto-frontier across latency and quality. This yields MobileLLM-Flash, a family of foundation models (350M, 650M, 1.4B) for efficient on-device use with strong capabilities, supporting up to 8k context length. MobileLLM-Flash delivers up to 1.8x and 1.6x faster prefill and decode on mobile CPUs with comparable or superior quality. Our analysis of Pareto-frontier design choices offers actionable principles for OD-LLM design.
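To make the attention-skipping idea concrete, here is a toy cost model, a sketch under assumptions rather than the paper's method: some layers run full quadratic attention over the context while "skipped" layers run only their linear-cost feed-forward sublayer. The per-layer pattern and the unit costs are hypothetical.

```python
# Toy cost model for attention skipping: full attention scales quadratically
# with context length n, while a layer that skips attention pays only the
# linear feed-forward cost. The 4-layer pattern below is a made-up example.

ATTN_PATTERN = [True, False, True, False]  # hypothetical per-layer pattern

def forward_cost(n: int, pattern: list[bool] = ATTN_PATTERN) -> int:
    """Abstract compute cost of one forward pass over n tokens."""
    cost = 0
    for has_attention in pattern:
        if has_attention:
            cost += n * n  # full attention: quadratic in context length
        cost += n          # feed-forward sublayer: linear in context length
    return cost

full = forward_cost(8192, [True] * 4)   # every layer attends
skip = forward_cost(8192)               # half the layers skip attention
print(f"relative cost with skipping: {skip / full:.2f}")
```

At long context lengths the quadratic terms dominate, so skipping attention in half the layers approaches a 2x reduction in this toy model; the real speedups reported (up to 1.8x prefill, 1.6x decode) are consistent with attention being only part of total runtime.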