Expert Pyramid Tuning: Efficient Parameter Fine-Tuning for Expertise-Driven Task Allocation

arXiv cs.CL / 3/16/2026

Key Points

  • Expert Pyramid Tuning (EPT) is a new architecture for hierarchical multi-task adaptation in PEFT that efficiently learns feature representations matched to task complexity.
  • EPT introduces a shared meta-knowledge subspace together with a pyramid projection mechanism that uses learnable up-projection operators to reconstruct high-dimensional features from low dimensions at multiple scales.
  • A task-aware router dynamically selects the optimal combination of multi-scale features, enabling adaptation tailored to each individual task.
  • In extensive experiments, EPT outperforms state-of-the-art MoE-LoRA methods, and its re-parameterization design improves performance while reducing the number of trainable parameters.
  • The paper, arXiv:2603.12577v1, is a new submission positioned within the latest trends in PEFT.

Abstract

Parameter-Efficient Fine-Tuning (PEFT) has become a dominant paradigm for deploying LLMs in multi-task scenarios due to its extreme parameter efficiency. While Mixture-of-Experts (MoE) based LoRA variants have achieved promising results by dynamically routing tokens to different low-rank experts, they largely overlook the hierarchical nature of task complexity. Existing methods typically employ experts with uniform architectures, limiting their ability to capture the diverse feature granularities required by distinct tasks--where some tasks demand high-level semantic abstraction while others require fine-grained syntactic manipulation. To bridge this gap, we propose Expert Pyramid Tuning (EPT), a novel architecture that integrates the multi-scale feature pyramid concept from computer vision into the realm of PEFT. Unlike standard LoRA, EPT decomposes task adaptation into two stages: (1) a shared meta-knowledge subspace that encodes universal linguistic patterns in low dimensions; (2) a pyramid projection mechanism that utilizes learnable up-projection operators to reconstruct high-dimensional features at varying scales. A task-aware router then dynamically selects the optimal combination of these multi-scale features. Extensive experiments across multiple multi-task benchmarks demonstrate that EPT significantly outperforms SOTA MoE-LoRA variants. Crucially, thanks to the re-parameterization capability of our design, EPT achieves this performance improvement while simultaneously reducing the number of training parameters.
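To make the two-stage decomposition concrete, here is a minimal NumPy sketch of the idea: one shared low-rank down-projection (the meta-knowledge subspace), several up-projection paths at different intermediate widths (the pyramid), and a per-task softmax router mixing the resulting reconstructions into a LoRA-style additive update. All names, ranks, and the routing scheme are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

d, r = 64, 8           # hidden width and shared subspace rank (illustrative)
scales = [8, 16, 32]   # assumed pyramid widths, coarse to fine

# Stage 1: shared meta-knowledge subspace -- one down-projection for all scales.
A = rng.normal(0.0, 0.02, size=(d, r))

# Stage 2: pyramid projection -- each scale s reconstructs d-dim features
# through its own learnable path r -> s -> d.
pyramid = [(rng.normal(0.0, 0.02, (r, s)), rng.normal(0.0, 0.02, (s, d)))
           for s in scales]

# Task-aware router: per-task logits over the pyramid scales (hypothetical task id).
router_logits = {"task_qa": rng.normal(size=len(scales))}

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def ept_delta(x, task):
    """Additive adaptation: router-weighted sum of multi-scale reconstructions."""
    w = softmax(router_logits[task])          # mixing weights over scales
    z = x @ A                                 # shared low-dimensional encoding
    return sum(wi * (z @ U @ V) for wi, (U, V) in zip(w, pyramid))

x = rng.normal(size=(4, d))                   # a batch of token features
out = x + ept_delta(x, "task_qa")             # frozen-backbone output + EPT update
```

Because each scale's update collapses to the product `A @ U @ V`, the per-scale adapters can in principle be merged into a single weight delta at inference time, which is the kind of re-parameterization the abstract alludes to.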