Parameter-efficient Quantum Multi-task Learning

arXiv cs.LG / 4/16/2026


Key Points

  • The paper proposes a parameter-efficient Quantum Multi-task Learning (QMTL) framework that swaps conventional task-specific linear heads for a fully quantum prediction head, built from variational quantum circuits (VQCs), within a hybrid architecture.
  • It uses a shared, task-independent quantum encoding stage plus lightweight, task-specific ansatz blocks to preserve specialization while keeping head parameters compact.
  • A theoretical parameter-scaling analysis suggests the proposed quantum head scales linearly with the number of tasks, while a comparable standard classical head grows quadratically under a controlled, capacity-matched setup.
  • Experiments on three multi-task benchmarks across NLP, medical imaging, and multimodal sarcasm detection show performance on par with or better than classical hard-parameter-sharing baselines, and better results than prior hybrid quantum MTL models with far fewer head parameters.
  • The authors demonstrate feasibility by running QMTL on noisy simulators and real quantum hardware, supporting practical executability despite quantum noise constraints.
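To make the head design in the points above concrete, the sketch below simulates a tiny version of the shared-encoding-plus-task-ansatz idea with a pure-Python statevector. This is only an illustrative toy, not the paper's implementation: the specific gate layout (one RY rotation per qubit plus a CNOT chain, for both the encoding and each task ansatz) and all function names are assumptions for demonstration.

```python
import math

def ry(theta):
    # single-qubit RY rotation matrix
    c, s = math.cos(theta / 2), math.sin(theta / 2)
    return [[c, -s], [s, c]]

def apply_1q(state, gate, wire, n):
    # apply a 1-qubit gate to `wire` of an n-qubit statevector (wire 0 = MSB)
    shift = n - 1 - wire
    new = [0j] * len(state)
    for i, amp in enumerate(state):
        bit = (i >> shift) & 1
        for out in (0, 1):
            j = (i & ~(1 << shift)) | (out << shift)
            new[j] += gate[out][bit] * amp
    return new

def apply_cnot(state, ctrl, targ, n):
    # flip `targ` amplitude pairs wherever the control bit is 1
    cmask, tmask = 1 << (n - 1 - ctrl), 1 << (n - 1 - targ)
    return [state[i ^ tmask] if i & cmask else state[i]
            for i in range(len(state))]

def shared_encoding(state, x, n):
    # task-independent stage: angle-encode classical features, then entangle
    for w, xi in enumerate(x):
        state = apply_1q(state, ry(xi), w, n)
    for w in range(n - 1):
        state = apply_cnot(state, w, w + 1, n)
    return state

def task_ansatz(state, theta, n):
    # lightweight task-specific block: one trainable RY layer + CNOT chain
    for w, t in enumerate(theta):
        state = apply_1q(state, ry(t), w, n)
    for w in range(n - 1):
        state = apply_cnot(state, w, w + 1, n)
    return state

def z_expectation(state, wire, n):
    # <Z> readout on one wire
    shift = n - 1 - wire
    return sum(((-1) ** ((i >> shift) & 1)) * abs(a) ** 2
               for i, a in enumerate(state))

def qmtl_head(x, task_thetas, n=3):
    # the encoding stage is shared by every task; only the small
    # per-task ansatz (len(theta) angles) differs between tasks
    outputs = []
    for theta in task_thetas:
        state = [0j] * (2 ** n)
        state[0] = 1 + 0j
        state = shared_encoding(state, x, n)
        state = task_ansatz(state, theta, n)
        outputs.append(z_expectation(state, 0, n))
    return outputs
```

Note the parameter bookkeeping this structure implies: adding a task adds only one small ansatz block (here, three angles), while the encoding stage is reused unchanged, which is the source of the head's compactness.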

Abstract

Multi-task learning (MTL) improves generalization and data efficiency by jointly learning related tasks through shared representations. In the widely used hard-parameter-sharing setting, a shared backbone is combined with task-specific prediction heads. However, task-specific parameters can grow rapidly with the number of tasks. Therefore, designing multi-task heads that preserve task specialization while improving parameter efficiency remains a key challenge. In Quantum Machine Learning (QML), variational quantum circuits (VQCs) provide a compact mechanism for mapping classical data to quantum states residing in high-dimensional Hilbert spaces, enabling expressive representations within constrained parameter budgets. We propose a parameter-efficient quantum multi-task learning (QMTL) framework that replaces conventional task-specific linear heads with a fully quantum prediction head in a hybrid architecture. The model consists of a VQC with a shared, task-independent quantum encoding stage, followed by lightweight task-specific ansatz blocks enabling localized task adaptation while maintaining compact parameterization. Under a controlled and capacity-matched formulation where the shared representation dimension grows with the number of tasks, our parameter-scaling analysis demonstrates that a standard classical head exhibits quadratic growth, whereas the proposed quantum head parameter cost scales linearly. We evaluate QMTL on three multi-task benchmarks spanning natural language processing, medical imaging, and multimodal sarcasm detection, where we achieve performance comparable to, and in some cases exceeding, classical hard-parameter-sharing baselines while consistently outperforming existing hybrid quantum MTL models with substantially fewer head parameters. We further demonstrate QMTL's executability on noisy simulators and real quantum hardware, illustrating its feasibility.
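The scaling claim in the abstract can be checked with simple counting. In the capacity-matched setup the shared representation dimension d grows with the number of tasks T, so T classical linear heads of width d each cost on the order of T·d = O(T²) parameters, while a quantum head whose per-task ansatz has a fixed number of angles costs O(T). The constants below (features per task, qubits, layers, classes) are illustrative assumptions, not the paper's configuration:

```python
def classical_head_params(T, d_per_task=16, classes=2):
    # capacity-matched assumption: shared representation dimension
    # grows linearly with the number of tasks T
    d = d_per_task * T
    # one linear head (weight matrix + bias) per task -> O(T * d) = O(T^2)
    return T * (d * classes + classes)

def quantum_head_params(T, qubits=8, layers=2):
    # shared encoding stage has no trainable parameters; each task adds
    # a fixed-size ansatz (one angle per qubit per layer) -> O(T)
    return T * (qubits * layers)
```

Doubling T roughly quadruples the classical head's parameter count (quadratic growth) but exactly doubles the quantum head's (linear growth), matching the paper's stated analysis.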