APreQEL: Adaptive Mixed Precision Quantization For Edge LLMs

arXiv cs.LG / 3/26/2026


Key Points

  • The paper addresses the challenge of deploying large language models on edge devices by reducing memory and compute costs through quantization without uniformly applying a single precision to all layers.
  • It argues that different model layers react differently to reduced precision and that memory usage and compute throughput do not always correlate, making deployment trade-offs more complex than uniform-precision approaches assume.
  • APreQEL introduces adaptive mixed-precision quantization that selects an appropriate quantization type per layer based on layer-wise contribution and hardware-specific behavior.
  • The method aims to jointly balance memory, latency, and accuracy under user-defined priorities, producing configurations that uniform quantization cannot achieve.
  • Overall, the work expands the design space for efficient edge LLM deployment by respecting both layer importance and end-to-end performance trade-offs; a rough sketch of what such per-layer selection could look like follows this list.
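
For illustration only, here is a minimal sketch of priority-weighted, per-layer precision selection. The candidate quantization types, the LayerProfile fields, the select_precisions function, and the cost weighting are all assumptions made for this example; they are not the paper's published scoring procedure.

```python
from dataclasses import dataclass

# Assumed candidate quantization types; the paper's actual set may differ.
QUANT_TYPES = ("fp16", "int8", "int4")

@dataclass
class LayerProfile:
    """Hypothetical per-layer profile combining sensitivity and hardware measurements."""
    name: str
    accuracy_cost: dict  # estimated quality drop per quant type (layer-wise contribution), normalized
    memory_cost: dict    # relative memory footprint per quant type on the target device
    latency_cost: dict   # relative measured kernel latency per quant type on the target device

def select_precisions(layers, priorities):
    """Assign each layer the quant type that minimizes a priority-weighted cost.

    priorities: user-defined weights, e.g. {"accuracy": 0.6, "memory": 0.2, "latency": 0.2}.
    All metrics are assumed to be pre-normalized to comparable scales.
    """
    plan = {}
    for layer in layers:
        def cost(q):
            return (priorities["accuracy"] * layer.accuracy_cost[q]
                    + priorities["memory"] * layer.memory_cost[q]
                    + priorities["latency"] * layer.latency_cost[q])
        plan[layer.name] = min(QUANT_TYPES, key=cost)
    return plan

if __name__ == "__main__":
    attn = LayerProfile(
        "block0.attn",
        accuracy_cost={"fp16": 0.0, "int8": 0.4, "int4": 0.9},
        memory_cost={"fp16": 1.0, "int8": 0.5, "int4": 0.25},
        latency_cost={"fp16": 1.0, "int8": 0.7, "int4": 0.8},  # int4 slower than int8 here: weak kernel support
    )
    mlp = LayerProfile(
        "block0.mlp",
        accuracy_cost={"fp16": 0.0, "int8": 0.1, "int4": 0.3},
        memory_cost={"fp16": 1.0, "int8": 0.5, "int4": 0.25},
        latency_cost={"fp16": 1.0, "int8": 0.6, "int4": 0.45},
    )
    print(select_precisions([attn, mlp], {"accuracy": 0.6, "memory": 0.2, "latency": 0.2}))
```

With these made-up profiles the attention block stays at fp16 while the MLP drops to int8, i.e. exactly the kind of mixed configuration that a single uniform precision cannot express.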

Abstract

Today, large language models have demonstrated their strengths in tasks ranging from reasoning and code generation to complex problem solving. However, this advancement comes with high computational cost and memory requirements, making it challenging to deploy these models on edge devices, where they could provide real-time responses and preserve data privacy. Quantization is one common approach to reducing memory use, but most methods apply it uniformly across all layers, which does not account for the fact that different layers may respond differently to reduced precision. Importantly, memory consumption and computational throughput are not necessarily aligned, further complicating deployment decisions. This paper proposes an adaptive mixed-precision quantization mechanism that balances memory, latency, and accuracy in edge deployment under user-defined priorities. This is achieved by analyzing the layer-wise contribution of each layer and by inferring how different quantization types behave on the target hardware platform, in order to assign the most suitable quantization type to each layer. This integration ensures that layer importance and overall performance trade-offs are jointly respected in the design. Our work unlocks configurations that uniform quantization cannot achieve, expanding the solution space for efficiently deploying LLMs on resource-constrained devices.
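
The abstract's observation that memory savings and throughput are not automatically aligned implies a profiling step on the target device. The snippet below is a small, self-contained timing harness one could use for such a step; the NumPy matmuls merely stand in for a real edge backend's fp16/int8/int4 kernels, and median_latency_ms and the chosen shapes are assumptions, not the paper's measurement procedure.

```python
import time
import numpy as np

def median_latency_ms(fn, warmup=2, iters=10):
    """Median wall-clock latency of a zero-argument callable, in milliseconds."""
    for _ in range(warmup):
        fn()
    samples = []
    for _ in range(iters):
        start = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - start) * 1e3)
    samples.sort()
    return samples[len(samples) // 2]

# Stand-in kernels: one layer shape, two precisions. Real profiles would cover every
# layer type, every candidate quant type, and the actual backend kernels on the device.
m, k, n = 64, 1024, 1024
x32 = np.random.randn(m, k).astype(np.float32)
w32 = np.random.randn(k, n).astype(np.float32)
x16, w16 = x32.astype(np.float16), w32.astype(np.float16)

latency_profile = {
    "fp32": median_latency_ms(lambda: x32 @ w32),
    "fp16": median_latency_ms(lambda: x16 @ w16),  # smaller footprint, but not always faster on a given backend
}
print(latency_profile)  # feeds the per-layer latency costs used during precision selection
```

On a CPU backend without fast half-precision kernels, the fp16 stand-in can come out slower than fp32 despite halving the memory footprint, which is the kind of mismatch between memory and throughput that motivates hardware-aware, per-layer selection.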