MTLSI-Net: A Linear Semantic Interaction Network for Parameter-Efficient Multi-Task Dense Prediction

arXiv cs.CV / 4/3/2026


Key Points

  • The paper introduces MTLSI-Net, a multi-task dense prediction architecture designed to improve global cross-task interaction without the quadratic cost of standard self-attention on high-resolution features.
  • MTLSI-Net uses linear-attention-style mechanisms via a shared global context matrix, aiming for cross-task dependency modeling with linear complexity and fewer parameters.
  • It proposes three main components: a multi-scale query linear fusion block for cross-task interaction across scales, a semantic token distiller to compress redundant information into compact tokens, and a cross-window integrated attention block to inject global semantics into local representations.
  • Experiments on NYUDv2 and PASCAL-Context report state-of-the-art performance, supporting both effectiveness (accuracy) and efficiency (compute/parameter reductions) for multi-task learning.
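The core efficiency idea behind the fusion block can be illustrated with a generic linear-attention sketch. This is not the paper's actual implementation; it is a minimal illustration, assuming an ELU-style positive feature map and assuming that all tasks' queries attend over one shared set of keys/values. The function and variable names (`linear_attention_shared_context`, `task_queries`) are hypothetical. The key point: the global context matrix `phi(K)^T V` is a `d x d` matrix computed once and reused by every task, so the cost is O(N·d²) rather than the O(N²·d) of standard self-attention over N pixels.

```python
import numpy as np

def feature_map(x):
    # A common positive feature map for linear attention (ELU + 1).
    return np.where(x > 0, x + 1.0, np.exp(x))

def linear_attention_shared_context(task_queries, K, V):
    """Linear-attention sketch with a shared global context matrix.

    task_queries: dict mapping task name -> (N, d) query array
    K, V: shared (N, d) key/value arrays from the backbone features
    """
    Kf = feature_map(K)            # (N, d)
    context = Kf.T @ V             # (d, d) global context matrix, computed ONCE
    norm = Kf.sum(axis=0)          # (d,) normalizer accumulated over all positions
    outputs = {}
    for task, Q in task_queries.items():
        Qf = feature_map(Q)        # (N, d)
        denom = Qf @ norm          # (N,) per-position normalization
        # Each task reuses the same context matrix: cross-task interaction
        # at linear cost in the number of positions N.
        outputs[task] = (Qf @ context) / denom[:, None]
    return outputs
```

Because every task's output is a normalized weighted average of the shared values, the per-task work after forming `context` is only an `(N, d) @ (d, d)` product, independent of the number of other positions.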

Abstract

Multi-task dense prediction aims to perform multiple pixel-level tasks simultaneously. However, capturing global cross-task interactions remains non-trivial due to the quadratic complexity of standard self-attention on high-resolution features. To address this limitation, we propose a Multi-Task Linear Semantic Interaction Network (MTLSI-Net), which facilitates cross-task interaction through linear attention. Specifically, MTLSI-Net incorporates three key components: a Multi-Task Multi-Scale Query Linear Fusion Block, which captures cross-task dependencies across multiple scales with linear complexity using a shared global context matrix; a Semantic Token Distiller that compresses redundant features into compact semantic tokens, distilling essential cross-task knowledge; and a Cross-Window Integrated Attention Block that injects global semantics into local features via a dual-branch architecture, preserving both global consistency and spatial precision. These components collectively enable the network to capture comprehensive cross-task interactions at linear complexity with reduced parameters. Extensive experiments on NYUDv2 and PASCAL-Context demonstrate that MTLSI-Net achieves state-of-the-art performance, validating its effectiveness and efficiency in multi-task learning.
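The distiller idea, compressing N feature vectors into a much smaller set of M semantic tokens, can be sketched with plain cross-attention against learnable token queries. This is an illustration of the general token-distillation pattern, not the paper's actual module; the names (`distill_tokens`, `token_queries`) and the choice of scaled-dot-product attention are assumptions for the sketch.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def distill_tokens(features, token_queries):
    """Compress (N, d) features into (M, d) compact semantic tokens, M << N.

    Each learnable token query attends over all N positions and pools
    them into one token, so downstream cross-task attention operates on
    M tokens instead of N pixels.
    """
    d = features.shape[1]
    # (M, N) attention of each token over every spatial position.
    attn = softmax(token_queries @ features.T / np.sqrt(d), axis=-1)
    return attn @ features  # (M, d) distilled tokens
```

Any subsequent attention over the distilled tokens then costs O(M) per query rather than O(N), which is the source of the compute savings when M is small (e.g. tens of tokens versus thousands of pixels).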