Frequency Switching Mechanism for Parameter-Efficient Multi-Task Learning

arXiv cs.CV / 3/24/2026


Key Points

  • The paper proposes Free Sinewich, a parameter-efficient multi-task learning framework that extends PEFT beyond single-task adaptation by enabling near-zero-cost weight modulation across tasks.
  • It introduces a Sine-AWB layer that merges low-rank factors with convolutional priors into a single kernel, then applies elementwise sinusoidal frequency-based modulation to generate task-specialized weights.
  • A lightweight Clock Net generates bounded frequencies, which stabilizes the sine modulation during training.
  • The authors provide theoretical arguments that sine modulation increases the effective rank of low-rank adapters and that frequency separation reduces correlation between different task weights.
  • On dense prediction benchmarks, Free Sinewich reports state-of-the-art performance-efficiency trade-offs, including up to a +5.39% gain over single-task fine-tuning while using only 6.53M trainable parameters, and claims the approach is compact and scalable via frequency-based parameter sharing.
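The key points above can be sketched in code. The following is a minimal, hypothetical illustration of the described mechanism (a frozen base weight, a merged low-rank delta kernel, per-task bounded frequencies from a "Clock Net", and elementwise sine modulation); all class and parameter names are our own assumptions, not the paper's implementation:

```python
import torch
import torch.nn as nn

class SineModulatedLinear(nn.Module):
    """Illustrative sketch: one shared kernel, task-specialized by sine modulation."""

    def __init__(self, in_features, out_features, rank=4, num_tasks=3):
        super().__init__()
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad_(False)       # frozen pretrained weight
        # Low-rank factors, merged into a single delta kernel at forward time
        self.A = nn.Parameter(torch.randn(out_features, rank) * 0.02)
        self.B = nn.Parameter(torch.randn(rank, in_features) * 0.02)
        # Stand-in for the Clock Net: one raw frequency per task
        self.raw_freq = nn.Parameter(torch.linspace(-1.0, 1.0, num_tasks))

    def forward(self, x, task_id):
        # Sigmoid bounds the frequency, keeping the sine argument well-conditioned
        freq = 1.0 + torch.sigmoid(self.raw_freq[task_id])   # in (1, 2)
        delta = self.A @ self.B                  # merged low-rank kernel
        task_delta = torch.sin(freq * delta)     # elementwise sinusoidal modulation
        return x @ (self.base.weight + task_delta).T
```

Switching `task_id` swaps the effective weights at near-zero cost: no per-task kernels are stored, only one scalar frequency per task selects a different modulation of the shared delta.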

Abstract

Multi-task learning (MTL) aims to enable a single model to solve multiple tasks efficiently; however, current parameter-efficient fine-tuning (PEFT) methods remain largely limited to single-task adaptation. We introduce Free Sinewich, a parameter-efficient multi-task learning framework that enables near-zero-cost weight modulation via frequency switching (Free). Specifically, a Sine-AWB (Sinewich) layer combines low-rank factors and convolutional priors into a single kernel, which is then modulated elementwise by a sinusoidal transformation to produce task-specialized weights. A lightweight Clock Net is introduced to produce bounded frequencies that stabilize this modulation during training. Theoretically, sine modulation enhances the rank of low-rank adapters, while frequency separation decorrelates the weights of different tasks. On dense prediction benchmarks, Free Sinewich achieves state-of-the-art performance-efficiency trade-offs (e.g., up to +5.39% improvement over single-task fine-tuning with only 6.53M trainable parameters), offering a compact and scalable paradigm based on frequency-based parameter sharing. Project page: https://casperliuliuliu.github.io/projects/Free-Sinewich/
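The abstract's rank claim can be checked numerically: applying an elementwise nonlinearity like sine to a low-rank matrix generally raises its numerical rank. The snippet below is only an illustration of that intuition (the scale 3.0 is an arbitrary choice), not the paper's formal argument:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((64, 4))
B = rng.standard_normal((4, 64))
W = A @ B                      # exactly rank-4 by construction
W_mod = np.sin(3.0 * W)        # elementwise sinusoidal modulation

print(np.linalg.matrix_rank(W))      # 4
print(np.linalg.matrix_rank(W_mod))  # much larger, typically full rank
```

Because sine is nonlinear, `W_mod` is no longer confined to the 4-dimensional column space of `W`, which is the intuition behind sine modulation enhancing the effective capacity of low-rank adapters.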