Quantization with Unified Adaptive Distillation to enable multi-LoRA based one-for-all Generative Vision Models on edge

arXiv cs.CV / 4/1/2026

Key Points

  • The paper proposes a unified edge-deployment framework for multi-task generative vision models that treats LoRA weights as runtime inputs, enabling dynamic task switching without compiling a separate binary per adapter (a minimal sketch of this idea follows the list).
  • It introduces QUAD (Quantization with Unified Adaptive Distillation), a quantization-aware training method that aligns multiple LoRA adapters under a shared quantization profile for efficient on-device execution.
  • A lightweight runtime stack, compatible with mobile NPUs, is implemented and evaluated across multiple edge chipsets.
  • Experiments report up to a 6x reduction in memory footprint and up to a 4x improvement in latency while preserving strong visual quality across several GenAI tasks.
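
The runtime-input idea is easiest to see in code. Below is a minimal PyTorch-style sketch, not the paper's actual runtime: the class name `RuntimeLoRALinear`, the rank, and the tensor layouts are illustrative assumptions. The point is that the compiled graph contains only the frozen base weights, while the low-rank factors arrive as ordinary input tensors.

```python
# Minimal sketch (assumption: PyTorch pseudocode, not the paper's runtime stack).
# LoRA factors A and B are fed as runtime inputs, so switching tasks only means
# feeding different tensors -- no per-adapter recompilation.
import torch
import torch.nn as nn

class RuntimeLoRALinear(nn.Module):
    """Frozen base linear layer; LoRA factors arrive as runtime inputs."""
    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.base = nn.Linear(in_features, out_features, bias=False)
        for p in self.base.parameters():
            p.requires_grad = False  # shared foundation weights stay fixed

    def forward(self, x, lora_A=None, lora_B=None, scale=1.0):
        y = self.base(x)
        if lora_A is not None and lora_B is not None:
            # Low-rank update: x @ A^T @ B^T, with rank r << min(in, out)
            y = y + scale * (x @ lora_A.t() @ lora_B.t())
        return y

# Usage: one compiled layer, two tasks selected purely by input tensors.
layer = RuntimeLoRALinear(512, 512)
x = torch.randn(1, 512)
edit_A, edit_B = torch.randn(8, 512), torch.randn(512, 8)        # "editing" adapter
remove_A, remove_B = torch.randn(8, 512), torch.randn(512, 8)    # "removal" adapter
y_edit = layer(x, edit_A, edit_B)
y_remove = layer(x, remove_A, remove_B)
```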

Abstract

Generative Artificial Intelligence (GenAI) features such as image editing, object removal, and prompt-guided image transformation are increasingly integrated into mobile applications. However, deploying Large Vision Models (LVMs) for such tasks on resource-constrained devices remains challenging due to their high memory and compute requirements. While Low-Rank Adapters (LoRAs) enable parameter-efficient task adaptation, existing mobile deployment pipelines typically compile a separate model binary for each LoRA together with a copy of the foundation model, resulting in redundant storage and increased runtime overhead. In this work, we present a unified framework for enabling multi-task GenAI inference on edge devices using a single shared model. Our key idea is to treat LoRA weights as runtime inputs rather than embedding them into the compiled model graph, allowing dynamic task switching at runtime without recompilation. To support efficient on-device execution, we introduce QUAD (Quantization with Unified Adaptive Distillation), a quantization-aware training strategy that aligns multiple LoRA adapters under a shared quantization profile. We implement the proposed system with a lightweight runtime stack compatible with mobile NPUs and evaluate it across multiple chipsets. Experimental results demonstrate up to a 6x reduction in memory footprint and up to a 4x improvement in latency, while maintaining high visual quality across multiple GenAI tasks.
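
To make the "shared quantization profile" idea concrete, here is a minimal sketch of what quantization-aware distillation across several adapters could look like. The `fake_quant` helper, the single per-tensor scale, the MSE distillation loss, and the `alpha` weighting are assumptions for illustration; QUAD's actual quantizer and objective are defined in the paper.

```python
# Minimal sketch (assumption: the fake-quant scheme, loss form, and per-tensor
# scale below are illustrative stand-ins, not QUAD's exact recipe).
import torch

def fake_quant(w, scale, bits=8):
    """Straight-through fake quantization with a shared per-tensor scale."""
    qmax = 2 ** (bits - 1) - 1
    q = torch.clamp(torch.round(w / scale), -qmax - 1, qmax) * scale
    return w + (q - w).detach()  # straight-through estimator for gradients

def quad_style_step(base_w, adapters, teacher_outs, x, scale, alpha=0.5):
    """One training step: every adapter sees the SAME quantization profile
    (one shared `scale`), and each is distilled toward its FP teacher output."""
    loss = 0.0
    for (A, B), teacher_y in zip(adapters, teacher_outs):
        w_eff = base_w + B @ A              # effective per-task weight
        w_q = fake_quant(w_eff, scale)      # shared quantization profile
        student_y = x @ w_q.t()
        loss = loss + alpha * torch.mean((student_y - teacher_y) ** 2)
    return loss / len(adapters)

# Usage: three hypothetical task adapters distilled under one shared scale,
# so a single quantized model serves every task on device.
base_w = torch.randn(512, 512)
adapters = [(torch.randn(8, 512), torch.randn(512, 8)) for _ in range(3)]
x = torch.randn(4, 512)
teacher_outs = [x @ (base_w + B @ A).t() for A, B in adapters]  # FP teachers
loss = quad_style_step(base_w, adapters, teacher_outs, x, scale=0.05)
```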