MTLSI-Net: A Linear Semantic Interaction Network for Parameter-Efficient Multi-Task Dense Prediction
arXiv cs.CV / 4/3/2026
💬 Opinion · Ideas & Deep Analysis · Models & Research
Key Points
- The paper introduces MTLSI-Net, a multi-task dense prediction architecture designed to improve global cross-task interaction without the quadratic cost of standard self-attention on high-resolution features.
- MTLSI-Net replaces quadratic self-attention with a linear-attention-style mechanism built around a shared global context matrix, modeling cross-task dependencies at linear complexity and with fewer parameters (a minimal sketch of this mechanism follows the list).
- It proposes three main components: a multi-scale query linear fusion block for cross-task interaction across scales, a semantic token distiller that compresses redundant spatial information into compact tokens, and a cross-window integrated attention block that injects global semantics into local representations (a token-distillation sketch also follows below).
- Experiments on NYUDv2 and PASCAL-Context report state-of-the-art performance, supporting both effectiveness (accuracy) and efficiency (compute/parameter reductions) for multi-task learning.
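
To make the linear-attention idea concrete, here is a minimal PyTorch sketch of cross-task interaction through a shared global context matrix. The module name `SharedContextAttention`, the `elu + 1` feature map, the per-task query projections, and all shapes are illustrative assumptions, not the paper's actual blocks.

```python
# A minimal sketch (assumptions throughout): each task reads from one shared
# K^T·V context matrix, so the attention cost is linear in the token count N.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedContextAttention(nn.Module):
    def __init__(self, dim: int, num_tasks: int):
        super().__init__()
        self.to_kv = nn.Linear(dim, 2 * dim)   # shared key/value projection
        self.to_q = nn.ModuleList(             # one lightweight query projection per task
            [nn.Linear(dim, dim) for _ in range(num_tasks)]
        )

    @staticmethod
    def feature_map(x):
        # Positive feature map used in many linear-attention variants (assumption).
        return F.elu(x) + 1.0

    def forward(self, shared_feats, task_feats):
        # shared_feats: (B, N, C) backbone tokens; task_feats: list of (B, N, C) per task
        k, v = self.to_kv(shared_feats).chunk(2, dim=-1)
        k = self.feature_map(k)
        context = torch.einsum("bnd,bne->bde", k, v)   # shared global context, (B, C, C)
        norm = k.sum(dim=1)                            # (B, C) normalizer
        outputs = []
        for t, feats in enumerate(task_feats):
            q = self.feature_map(self.to_q[t](feats))          # (B, N, C)
            out = torch.einsum("bnd,bde->bne", q, context)     # read out from shared context
            denom = torch.einsum("bnd,bd->bn", q, norm).unsqueeze(-1) + 1e-6
            outputs.append(out / denom)
        return outputs
```

Because the C×C context matrix is computed once from shared keys and values, each extra task only adds a query projection and a read-out, which is where the claimed parameter and compute savings over per-task quadratic attention would come from.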
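The key points also mention a semantic token distiller that compresses redundant spatial information into a handful of compact tokens. Below is a hedged sketch of one common way to do this (attention-style pooling over space); the token count, the pooling formulation, and the name `TokenDistiller` are assumptions, not the paper's actual design.

```python
# Sketch only: compress (B, N, C) dense features into (B, K, C) tokens, K << N.
import torch
import torch.nn as nn

class TokenDistiller(nn.Module):
    def __init__(self, dim: int, num_tokens: int = 16):
        super().__init__()
        self.score = nn.Linear(dim, num_tokens)   # per-position affinity to each token

    def forward(self, feats):
        # feats: (B, N, C) flattened high-resolution features
        attn = self.score(feats).softmax(dim=1)             # (B, N, K), normalized over space
        tokens = torch.einsum("bnk,bnc->bkc", attn, feats)  # (B, K, C) compact semantic tokens
        return tokens
```

Compact tokens like these could then carry the global semantics that a cross-window attention block injects back into local window features, consistent with the role the key points describe for the third component.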