Rocks, Pebbles and Sand: Modality-aware Scheduling for Multimodal Large Language Model Inference

arXiv cs.AI / 3/30/2026


Key Points

  • The paper argues that multimodal LLM inference workloads (text, images, videos) have drastically different resource needs, leading to latency spikes and head-of-line blocking when served by text-optimized systems.
  • It introduces a simple “modality as workload size” abstraction—videos as rocks, images as pebbles, and text as sand—to guide scheduling decisions.
  • RPS-Serve is proposed as a modality-aware scheduler that classifies requests, dynamically prioritizes them, and uses aging to prevent starvation of heavier workloads.
  • Experiments on state-of-the-art MLLMs show RPS-Serve reduces average time-to-first-token (TTFT) by 54% overall and by 78.5% for latency-critical requests versus current serving systems.
  • The study frames the result as achieving more “LLM-like” interactive responsiveness for multimodal LLMs while improving overall resource utilization.
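The classify-prioritize-age loop described above can be sketched as a toy scheduler. This is a minimal illustration of the rocks/pebbles/sand idea, not the paper's implementation: the class name, base-priority constants, and aging rate are all assumptions chosen for clarity.

```python
# Toy sketch of modality-aware scheduling with aging. BASE_PRIORITY and
# AGING_RATE are illustrative values, not taken from the paper.
BASE_PRIORITY = {"text": 0, "image": 10, "video": 100}  # sand < pebbles < rocks
AGING_RATE = 1.0  # priority credit per time unit waited (assumed value)

class ModalityAwareScheduler:
    """Lower effective priority value means served sooner."""

    def __init__(self):
        self._queue = []  # entries: (modality, arrival_time, request_id)

    def submit(self, request_id, modality, arrival_time):
        """Classify the request by its modality and enqueue it."""
        self._queue.append((modality, arrival_time, request_id))

    def next_request(self, now):
        """Dispatch the request with the lowest effective priority.

        Aging subtracts waiting time from the modality cost, so a
        long-waiting rock eventually overtakes freshly arrived sand
        and is never starved indefinitely."""
        if not self._queue:
            return None

        def effective(entry):
            modality, arrival, _ = entry
            return BASE_PRIORITY[modality] - AGING_RATE * (now - arrival)

        best = min(self._queue, key=effective)
        self._queue.remove(best)
        return best[2]
```

With these constants, a text request arriving while a video waits is served first (sand flows past the rock), but once the video has waited long enough its aged priority drops below any fresh text request, preventing starvation.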

Abstract

Multimodal Large Language Models (MLLMs) power platforms like ChatGPT, Gemini, and Copilot, enabling richer interactions with text, images, and videos. These heterogeneous workloads introduce additional inference stages, such as vision preprocessing and encoding, that inflate latency and memory demand. Existing LLM serving systems, optimized for text-only workloads, fail under multimodality: large requests (e.g., videos) monopolize resources, causing severe head-of-line blocking and performance degradation. Our key insight is that multimodal requests differ by orders of magnitude in resource demands, which we capture through a simple abstraction: videos behave like rocks, images like pebbles, and text like sand. We design RPS-Serve, a modality-aware scheduler that lets sand flow quickly through pebbles and rocks, preserving interactive responsiveness. RPS-Serve classifies requests, prioritizes them dynamically, and applies aging to avoid starvation. Evaluation across state-of-the-art MLLMs shows that RPS-Serve reduces average time-to-first-token (TTFT) by 54% overall, and by 78.5% for latency-critical requests, compared to current systems. RPS-Serve delivers LLM-like responsiveness for MLLMs through modality-aware scheduling and efficient use of the available resources.