AI Navigate

MoE-SpAc: Efficient MoE Inference Based on Speculative Activation Utility in Heterogeneous Edge Scenarios

arXiv cs.AI / 3/12/2026

💬 Opinion · Developer Stack & Infrastructure · Models & Research

Key Points

  • MoE-SpAc addresses memory constraints for edge MoE inference by repurposing Speculative Decoding as a memory-aware lookahead mechanism.
  • It introduces a Speculative Utility Estimator to forecast expert demand and guide memory allocation and eviction decisions.
  • It employs a Heterogeneous Workload Balancer to partition computation via online integer optimization and an Asynchronous Execution Engine to synchronize prefetching and eviction in the same utility space.
  • Experimental results show a 42% improvement in throughput (TPS) over the state-of-the-art SD-based baseline and an average 4.04x speedup over standard baselines; code is available on GitHub.
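The utility-driven prefetch/eviction loop described in the key points can be sketched roughly as follows. This is a hypothetical illustration, not the paper's implementation: the class name, decay scheme, and method signatures are all assumptions. The core idea is that the draft model's lookahead tokens reveal which experts are likely to be needed, so their routing decisions can be aggregated into per-expert utility scores that drive both prefetching and eviction.

```python
class SpeculativeUtilityEstimator:
    """Hypothetical sketch: score each expert by how often the draft
    model's lookahead (speculative) tokens route to it, with recency
    decay so stale activations fade. Names and decay rule are assumed,
    not taken from the paper."""

    def __init__(self, num_experts, decay=0.9):
        self.num_experts = num_experts
        self.decay = decay
        self.utility = [0.0] * num_experts

    def observe_draft(self, routed_experts_per_token):
        # routed_experts_per_token: one list of expert IDs per draft
        # token produced by the speculative lookahead pass.
        for expert_ids in routed_experts_per_token:
            # Decay all scores, then credit the experts this token hits.
            self.utility = [u * self.decay for u in self.utility]
            for e in expert_ids:
                self.utility[e] += 1.0

    def top_k(self, k):
        # Experts worth keeping (or prefetching) in device memory.
        order = sorted(range(self.num_experts),
                       key=lambda e: -self.utility[e])
        return order[:k]

    def eviction_candidates(self, resident, keep):
        # Resident experts outside the keep set, lowest utility first,
        # so prefetch and eviction share one utility space.
        keep_set = set(keep)
        victims = [e for e in resident if e not in keep_set]
        return sorted(victims, key=lambda e: self.utility[e])
```

Because both decisions read the same scores, a prefetch of a rising expert and an eviction of a fading one are directly comparable, which is the "same utility space" property the framework claims.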

Abstract

Mixture-of-Experts (MoE) models enable scalable performance but face severe memory constraints on edge devices. Existing offloading strategies struggle with I/O bottlenecks due to the dynamic, low-information nature of autoregressive expert activation. In this paper, we propose to repurpose Speculative Decoding (SD) not merely as a compute accelerator, but as an informative lookahead sensor for memory management, supported by our theoretical and empirical analyses. Hence, we introduce MoE-SpAc, an MoE inference framework that integrates a Speculative Utility Estimator to track expert demand, a Heterogeneous Workload Balancer to dynamically partition computation via online integer optimization, and an Asynchronous Execution Engine to unify prefetching and eviction in the same utility space. Extensive experiments on seven benchmarks demonstrate that MoE-SpAc achieves a 42% improvement in TPS over the SOTA SD-based baseline, and an average 4.04x speedup over all standard baselines. Code is available at https://github.com/lshAlgorithm/MoE-SpAc.
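The abstract's heterogeneous partitioning step can be approximated with a toy sketch. The paper states the balancer solves an online integer optimization; the greedy placement below is an assumed simplification for illustration only, and the function name and parameters (`utility`, `expert_mem`, `gpu_budget`) are hypothetical.

```python
def balance_experts(utility, expert_mem, gpu_budget):
    """Hypothetical sketch of memory-aware expert partitioning:
    greedily place the highest-utility experts on the accelerator
    until its memory budget is exhausted; the rest stay in host
    memory. A real balancer would solve this as an online integer
    program rather than greedily."""
    order = sorted(range(len(utility)), key=lambda e: -utility[e])
    gpu, cpu, used = [], [], 0
    for e in order:
        if used + expert_mem[e] <= gpu_budget:
            gpu.append(e)
            used += expert_mem[e]
        else:
            cpu.append(e)
    return sorted(gpu), sorted(cpu)
```

For example, with utilities `[5, 1, 3, 2]`, uniform 2-unit experts, and a 4-unit GPU budget, the two highest-utility experts (0 and 2) land on the GPU and the rest fall back to host memory. Greedy placement can be suboptimal when expert sizes vary, which is one reason an integer program is the natural formulation.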