Taming Asynchronous CPU-GPU Coupling for Frequency-aware Latency Estimation on Mobile Edge

arXiv cs.AI / 4/20/2026


Key Points

  • The paper addresses how to accurately estimate mobile edge model inference latency under DVFS, where CPU and GPU frequency changes make static profiling unreliable.
  • It argues that simple analytic scaling cannot capture latency variance because CPU and GPU operate with complex asynchronous coupling (CPU kernel launch vs GPU execution).
  • The proposed method, FLAME, uses layer-wise modeling to quantify overlap/parallelism and to account for pipeline bubbles from asynchronous interactions, then aggregates these effects across the full model.
  • FLAME achieves accurate latency estimates across many CPU/GPU frequency combinations from only a sparse set of profiling samples, dramatically reducing profiling time for both DNNs and SLMs.
  • The authors demonstrate FLAME in deadline-aware DVFS, reporting better power efficiency and tighter latency guarantees than existing state-of-the-art methods.
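
To see why per-layer times do not simply add up under asynchronous coupling, here is a minimal sketch (not the paper's model; the function and all numbers are illustrative) of a CPU that serially enqueues kernels while the GPU executes them asynchronously:

```python
# Hypothetical sketch of asynchronous CPU launch / GPU execution: end-to-end
# latency is neither the sum nor the max of per-layer times once launch cost
# and execution overlap.

def pipeline_latency(launch_times, exec_times):
    """Total latency when the CPU enqueues kernel i (launch_times[i]) while
    the GPU asynchronously executes earlier kernels (exec_times[i])."""
    cpu_done = 0.0   # time the CPU finishes enqueuing the current kernel
    gpu_done = 0.0   # time the GPU finishes the current kernel
    for launch, exec_ in zip(launch_times, exec_times):
        cpu_done += launch                # CPU launches are serial
        start = max(cpu_done, gpu_done)   # GPU needs the launch AND the prior kernel done
        gpu_done = start + exec_          # a bubble appears when cpu_done > gpu_done
    return gpu_done

# Fast CPU (cheap launches): GPU stays busy, latency ~ sum of exec times.
fast_cpu = pipeline_latency([0.1] * 4, [1.0] * 4)   # 4.1
# Slow CPU (costly launches): launches dominate, bubbles inflate latency.
slow_cpu = pipeline_latency([1.5] * 4, [1.0] * 4)   # 7.0
```

When launches are cheap, latency approaches the sum of execution times; when launches dominate, the GPU idles between kernels and the bubbles inflate end-to-end latency. This frequency-dependent overlap is exactly what static per-layer profiling misses.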

Abstract

Precise estimation of model inference latency is crucial for time-critical mobile edge applications, enabling devices to calculate latency margins against deadlines and trade them for enhanced model performance or resource savings. However, the ubiquity of Dynamic Voltage and Frequency Scaling (DVFS) renders traditional static profiling invalid in real-world deployments, as inference latency fluctuates with varying processor (CPU and GPU) frequencies. While exhaustive profiling across frequency combinations is theoretically possible, it is prohibitively expensive, particularly for emerging Small Language Models (SLMs), whose variable context lengths inflate profiling time to days. We observe that simple analytic scaling fails to predict these fluctuations due to the complex asynchronous coupling between the CPU (kernel launching) and the GPU (execution). In this paper, we introduce FLAME to accurately estimate inference latency across frequency combinations. It features a novel layer-wise model that quantifies the overlapping parallelism and then aggregates the dynamic pipeline bubbles caused by asynchronous processor interactions when composing layers into the full model. This bottom-up approach generalizes across diverse models, from DNNs to SLMs, and its precise modeling allows profiling only a sparse subset of samples, cutting DNN profiling from hours to minutes and SLM profiling from days to mere minutes, while keeping estimation errors small across frequencies. We further showcase FLAME's utility in deadline-aware DVFS, where it outperforms the state-of-the-art approach in both power efficiency and latency guarantees.
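
As a back-of-the-envelope illustration of the sparse-profiling idea, suppose a single layer's latency roughly followed a toy inverse-frequency model t ≈ a/f_cpu + b/f_gpu (note this is the kind of simple analytic scaling the paper argues is insufficient on its own at the whole-model level). Fitting such a per-layer model from a handful of profiled frequency pairs lets the rest of the frequency grid be predicted rather than measured; `fit_layer`, `predict`, and all numbers below are illustrative assumptions, not the paper's method:

```python
# Hypothetical sketch: recover per-layer coefficients (a, b) of
# t = a/f_cpu + b/f_gpu from a few profiled samples by solving the
# 2x2 least-squares normal equations, then predict unprofiled pairs.

def fit_layer(samples):
    """samples: [(f_cpu, f_gpu, measured_latency), ...] -> (a, b)."""
    s11 = s12 = s22 = r1 = r2 = 0.0
    for fc, fg, t in samples:
        x1, x2 = 1.0 / fc, 1.0 / fg
        s11 += x1 * x1; s12 += x1 * x2; s22 += x2 * x2
        r1 += x1 * t; r2 += x2 * t
    det = s11 * s22 - s12 * s12
    a = (r1 * s22 - r2 * s12) / det
    b = (r2 * s11 - r1 * s12) / det
    return a, b

def predict(a, b, fc, fg):
    return a / fc + b / fg

# Synthetic "profiles" at three sparse frequency pairs (GHz),
# generated from ground-truth a=2, b=6 for the sake of the example.
samples = [(1.0, 0.5, 2 / 1.0 + 6 / 0.5),
           (2.0, 1.0, 2 / 2.0 + 6 / 1.0),
           (1.5, 1.5, 2 / 1.5 + 6 / 1.5)]
a, b = fit_layer(samples)
# Predict an unprofiled frequency combination instead of measuring it.
est = predict(a, b, 2.0, 0.8)   # ≈ 2/2.0 + 6/0.8 = 8.5
```

FLAME's point, per the abstract, is that such scaling must be applied layer-wise and then corrected for the pipeline bubbles that appear when layers are composed under asynchronous CPU-GPU interaction, rather than fitted to the whole model at once.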