SCALE-LoRA: Auditing Post-Retrieval LoRA Composition with Residual Merging and View Reliability

arXiv cs.AI / 5/5/2026


Key Points

  • The paper addresses a key challenge in open-pool LoRA reuse: retrieving relevant LoRA adapters and composing them does not necessarily yield compatible parameter updates or reliable outputs for a new task with only a small support set.
  • It introduces SCALE (Sparse-Composition Agreement Layer), a post-retrieval auditing and composition framework that includes a deployable 1.0× merge path plus a more costly reliability-analysis layer based on multi-view disagreement.
  • The LASRC component reduces merge interference by keeping a linear anchor and residualizing block-wise adapter update directions, improving the stability of sparse residual composition.
  • The reliability layer treats disagreement across sparse composition “views” as an uncertainty signal and uses signals like agreement, a support-loss proxy for selection, and oracle headroom while accounting for explicit path costs.
  • Experiments on FLAN-T5-Large with BIG-Bench Hard and a 97-LoRA pool show that LASRC provides a directional single-view gain under fixed retrieval. The authors also report a SCALE-support variant that performs reliability analysis without requiring query labels, with consistent qualitative trends across additional decoder-only backbones.
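
The anchor-plus-residual merging idea behind LASRC can be sketched roughly as follows. This is an illustrative reconstruction, not the paper's implementation: the function name, the dense-update representation (each adapter's update taken as its materialized `B @ A` matrix for one weight block), and the top-k sparsification rule are all assumptions made here for concreteness.

```python
import numpy as np

def lasrc_merge(anchor: np.ndarray, others: list[np.ndarray],
                keep_ratio: float = 0.2) -> np.ndarray:
    """Illustrative anchor-plus-residual merge for one weight block.

    The anchor adapter's update is kept linearly (unchanged). Every
    other adapter's update is residualized against the anchor
    direction, so the components that would interfere with the anchor
    along its own direction are removed; only the largest-magnitude
    residual entries are then kept before summing (sparse composition).
    """
    merged = anchor.copy()
    a = anchor.ravel()
    a_norm_sq = float(a @ a) + 1e-12  # guard against a zero anchor
    for upd in others:
        u = upd.ravel()
        # Remove the component parallel to the anchor direction.
        residual = u - (u @ a) / a_norm_sq * a
        # Sparsify: keep only the top keep_ratio fraction of entries.
        k = max(1, int(keep_ratio * residual.size))
        thresh = np.partition(np.abs(residual), -k)[-k]
        residual[np.abs(residual) < thresh] = 0.0
        merged += residual.reshape(upd.shape)
    return merged
```

In this toy version, an adapter whose update points entirely along the anchor direction contributes nothing after residualization, which is one simple way to read the claim that preserving a linear anchor reduces merge interference.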

Abstract

Libraries of Low-Rank Adaptation (LoRA) adapters are becoming a practical by-product of parameter-efficient adaptation. Once such adapters accumulate, a natural question is no longer how to train one adapter for one task, but how to reuse an open pool of adapters for a new task given only a small support set. Prior work has shown that LoRA modules can be composed at the task level and dynamically selected at the instance level. However, open-pool LoRA reuse is not automatic: retrieving relevant adapters does not guarantee that their parameter updates are compatible, and composing adapters does not guarantee reliable outputs. We introduce the Sparse-Composition Agreement Layer (SCALE), a post-retrieval audit and composition framework for open-pool LoRA reuse. SCALE contains a deployable 1.0× merge path, Layer-Adaptive Sparse Residual Composition (LASRC), and a higher-cost reliability-analysis layer for multi-view disagreement. LASRC addresses merge interference by preserving a linear anchor while residualizing block-wise adapter update directions. The reliability layer treats disagreement among sparse composition views as an observable uncertainty signal and compares agreement, support-loss proxy selection, and oracle headroom under explicit path cost. In matched FLAN-T5-Large, BIG-Bench Hard (BBH), and 97-LoRA experiments, LASRC gives a directional single-view gain under fixed retrieval, while SCALE-support is reported as a query-label-free 3.0× reliability-analysis variant rather than as a calibrated or throughput-equivalent selector. Protocol-distinct BBH-8 validation shows the same qualitative trend on three decoder-only backbones. Detailed scores, paired audits, and path-cost records are reported in the experimental section.
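
The reliability layer's use of view disagreement and a support-loss proxy can be sketched as follows. This is a minimal illustration under assumed interfaces, not the paper's method: the function name, the string-valued view outputs, the majority-vote agreement score, and the uniform per-view cost are all simplifying assumptions made here.

```python
from collections import Counter

def audit_views(view_outputs: list[str], support_losses: list[float],
                cost_per_view: float = 1.0):
    """Illustrative reliability audit over sparse-composition views.

    view_outputs: one decoded answer per sparse-composition view.
    support_losses: each view's loss on the small labeled support
    set, used as a query-label-free proxy for selecting a view.
    Returns the selected answer, an agreement score in [0, 1]
    (the fraction of views that vote for the majority answer), and
    the explicit path cost (number of views times cost_per_view).
    """
    counts = Counter(view_outputs)
    _, majority_votes = counts.most_common(1)[0]
    agreement = majority_votes / len(view_outputs)
    # Selection via the support-loss proxy: no query labels needed.
    best_view = min(range(len(support_losses)),
                    key=support_losses.__getitem__)
    selected = view_outputs[best_view]
    path_cost = len(view_outputs) * cost_per_view
    return selected, agreement, path_cost
```

With three views, the path cost comes out to 3.0× the single-view merge path, which matches the cost framing in the abstract; the agreement score is the kind of observable uncertainty signal the paper compares against oracle headroom.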