Understand and Accelerate Memory Processing Pipeline for Disaggregated LLM Inference

arXiv cs.AI / 4/1/2026


Key Points

  • The paper presents a unified four-step memory processing pipeline for disaggregated long-context LLM inference: Prepare Memory, Compute Relevancy, Retrieval, and Apply to Inference.
  • Profiling shows that memory processing can account for roughly 22%–97% of end-to-end LLM inference time and exhibits strong heterogeneity across different operations.
  • It argues that heterogeneous systems are effective for accelerating memory processing by matching operation types to hardware.
  • The authors demonstrate a GPU–FPGA approach that offloads sparse, irregular, and memory-bound tasks to FPGAs while keeping compute-heavy work on GPUs.
  • In experiments on an AMD MI210 GPU and Alveo U55C FPGA (and with similar results on NVIDIA A100), the system achieves about 1.04–2.2× faster inference and 1.11–4.7× lower energy versus a GPU baseline across multiple LLM optimizations.
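To make the four pipeline steps concrete, here is a minimal toy sketch of how they compose in a RAG-style setting. All function names, the word-overlap relevancy metric, and the block size are illustrative assumptions for exposition, not the paper's actual implementation:

```python
def prepare_memory(context, block_size=4):
    """Step 1: split the long context into fixed-size memory blocks."""
    tokens = context.split()
    return [tokens[i:i + block_size] for i in range(0, len(tokens), block_size)]

def compute_relevancy(blocks, query):
    """Step 2: score each block against the query (toy word-overlap metric)."""
    q = set(query.split())
    return [len(q.intersection(block)) for block in blocks]

def retrieve(blocks, scores, k=2):
    """Step 3: select the top-k blocks -- the sparse, irregular-access step."""
    ranked = sorted(range(len(blocks)), key=lambda i: scores[i], reverse=True)
    return [blocks[i] for i in sorted(ranked[:k])]  # keep original order

def apply_to_inference(retrieved, query):
    """Step 4: assemble the reduced context that feeds the model's attention."""
    context = " ".join(" ".join(b) for b in retrieved)
    return f"{context}\n\nQuery: {query}"

ctx = "the cat sat on the mat while the dog chased the ball in the park"
blocks = prepare_memory(ctx)
scores = compute_relevancy(blocks, "dog ball")
print(apply_to_inference(retrieve(blocks, scores), "dog ball"))
```

In the paper's framing, Steps 2 and 3 are the sparse, memory-bound stages that suit FPGA offload, while the dense attention and generation behind Step 4 stay on the GPU.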

Abstract

Modern large language models (LLMs) increasingly depend on efficient long-context processing and generation mechanisms, including sparse attention, retrieval-augmented generation (RAG), and compressed contextual memory, to support complex reasoning. We show that these optimizations can be unified into a four-step memory processing pipeline: Prepare Memory, Compute Relevancy, Retrieval, and Apply to Inference. Through systematic profiling, we identify a 22%–97% memory processing overhead in LLM inference and strong heterogeneity in its computational characteristics. Motivated by this insight, we argue that **heterogeneous systems** are well-suited to accelerate memory processing and thus end-to-end inference. We demonstrate this approach on a GPU–FPGA system by offloading sparse, irregular, and memory-bound operations to FPGAs while retaining compute-intensive operations on GPUs. Evaluated on an AMD MI210 GPU and an Alveo U55C FPGA, our system is 1.04–2.2× faster and requires 1.11–4.7× less energy across multiple LLM inference optimizations than the GPU baseline (similar results hold on an NVIDIA A100). These results establish heterogeneous systems as a practical direction for efficient LLM memory processing and inform future heterogeneous hardware design.