Learning from Many and Adapting to the Unknown in Open-set Test Streams

arXiv cs.LG / 4/2/2026


Key Points

  • The paper argues that while LLMs generalize well in controlled settings, they often fail in deployment due to evolving tasks and continual distribution shift, and notes shortcomings in existing test-time adaptation (TTA) methods.
  • It proposes Synapse Consolidation (SyCo), a parameter-efficient adaptation approach that updates low-rank adapters using structured objectives and biological inspiration to preserve useful source knowledge.
  • SyCo uses Rac1 to restrict plasticity to a less source-critical tail-gradient subspace for rapid specialization, and MAPK with a tiered controller to reduce noise and consolidate reliable adaptations over non-stationary streams.
  • To better reflect real deployments, the authors introduce the Multi-source Open-set Adaptation (MOA) setting with multiple labeled source tasks and adaptation on open, unlabeled, non-stationary test streams mixing seen and unseen tasks.
  • Experiments across 18 NLP datasets in the MOA setting show SyCo outperforming strong baselines, reaching 78.31% on unseen-task adaptation and 85.37% on unseen-data shifts.
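The tail-gradient subspace idea above can be illustrated with a minimal numpy sketch. This is not the paper's implementation: the function names (`tail_subspace`, `project_update`), the use of an SVD over stacked source-task gradients, and the `keep_frac` parameter are all illustrative assumptions. The intuition it captures is that directions with small singular values in the source-gradient statistics carry little source-critical signal, so confining test-time updates to them preserves source knowledge.

```python
import numpy as np

def tail_subspace(source_grads, keep_frac=0.5):
    # Stack per-task source gradients as rows and take an SVD; right-singular
    # vectors with the smallest singular values span the "tail" subspace that
    # is least important for the source tasks (illustrative assumption).
    G = np.stack(source_grads)                # (num_tasks, dim)
    _, s, Vt = np.linalg.svd(G, full_matrices=False)
    k = max(1, int(len(s) * keep_frac))
    return Vt[len(s) - k:]                    # (k, dim) tail directions

def project_update(grad, tail_V):
    # Restrict a test-time gradient to the tail subspace: V^T (V g).
    # Repeated projection is a no-op, as expected of an orthogonal projector.
    return tail_V.T @ (tail_V @ grad)
```

In a real adapter this projection would be applied to the low-rank update matrices rather than a flat gradient vector, but the projector structure is the same.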

Abstract

Large Language Models (LLMs) generalize across tasks via reusable representations and flexible reasoning, yet remain brittle in real deployment under evolving tasks and continual distribution shift. A common approach is Test-Time Adaptation (TTA), but existing methods update models with hand-designed unsupervised objectives over the full parameter space and largely overlook preserving shared source knowledge and the reliability of adaptation signals. Drawing on molecular signaling cascades of memory updating in Drosophila, we propose Synapse Consolidation (SyCo), a parameter-efficient LLM adaptation method that updates low-rank adapters through Rac1 and MAPK pathways under the guidance of a structured TTA objective driven by problem understanding, process understanding, and a source-domain guardrail. Rac1 confines plasticity to a tail-gradient subspace that is less critical for source knowledge, enabling rapid specialization while preserving source representations. MAPK uses a tiered controller to suppress noisy updates and consolidate useful adaptations under non-stationary streams. To model real deployments with multiple sources and continually emerging tasks, we introduce the Multi-source Open-set Adaptation (MOA) setting, where a model is trained on multiple labeled source tasks and then adapts on open, non-stationary unlabeled test streams that mix seen and unseen tasks with partial overlap in label and intent space. Across 18 NLP datasets in the MOA setting, SyCo consistently outperforms strong baselines, achieving 78.31% on unseen-task adaptation and 85.37% on unseen-data shifts.
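One way to picture the MAPK-style tiered controller is as an update gate keyed to prediction uncertainty. The sketch below is a hypothetical reading, not the paper's mechanism: the entropy thresholds, the three tiers (drop / buffer / consolidate), and the EMA buffer are all assumptions chosen to make the "suppress noisy updates, consolidate reliable ones" behavior concrete.

```python
import numpy as np

def entropy(probs):
    # Shannon entropy of a predicted class distribution (nats).
    p = np.clip(probs, 1e-12, 1.0)
    return -np.sum(p * np.log(p))

class TieredController:
    """Hypothetical three-tier gate on candidate parameter updates."""

    def __init__(self, params, low=0.5, high=1.2, ema=0.9):
        self.params = params
        self.low, self.high, self.ema = low, high, ema
        self.buffer = np.zeros_like(params)   # holds uncertain updates

    def step(self, update, probs):
        h = entropy(probs)
        if h >= self.high:                    # unreliable signal: discard
            return "dropped"
        if h >= self.low:                     # uncertain: accumulate via EMA
            self.buffer = self.ema * self.buffer + (1 - self.ema) * update
            return "buffered"
        # confident: apply the fresh update plus any consolidated buffer
        self.params += update + self.buffer
        self.buffer = np.zeros_like(self.params)
        return "consolidated"
```

Under this reading, high-entropy stream samples never touch the adapter, mid-entropy samples contribute only through a smoothed buffer, and low-entropy samples commit both their own update and the consolidated residue, matching the suppress-then-consolidate behavior described for non-stationary streams.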