Learning from Many and Adapting to the Unknown in Open-set Test Streams
arXiv cs.LG / 4/2/2026
Key Points
- The paper argues that while LLMs generalize well in controlled settings, they often fail in deployment due to evolving tasks and continual distribution shift, and notes shortcomings in existing test-time adaptation (TTA) methods.
- It proposes Synapse Consolidation (SyCo), a parameter-efficient adaptation approach that updates low-rank adapters using structured objectives and biological inspiration to preserve useful source knowledge.
- In SyCo, a Rac1-inspired mechanism restricts plasticity to a tail-gradient subspace that is less critical to the source tasks, enabling rapid specialization, while a MAPK-inspired tiered controller reduces noise and consolidates reliable adaptations over non-stationary streams.
- To better reflect real deployments, the authors introduce the Multi-source Open-set Adaptation (MOA) setting with multiple labeled source tasks and adaptation on open, unlabeled, non-stationary test streams mixing seen and unseen tasks.
- Experiments across 18 NLP datasets in the MOA setting show SyCo outperforming strong baselines, reaching 78.31% on unseen-task adaptation and 85.37% on unseen-data shifts.
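The tail-gradient idea above can be sketched in a few lines: if we stack per-example gradients from the source tasks, the smallest right-singular directions are the ones the source objective depends on least, so confining test-time updates to that subspace perturbs source knowledge minimally. The snippet below is a minimal illustrative sketch under that assumption; the function names, shapes, and the plain SVD/projection approach are ours, not the paper's implementation.

```python
import numpy as np

def tail_subspace(source_grads: np.ndarray, k: int) -> np.ndarray:
    """Orthonormal basis (d x k) spanning the k smallest
    right-singular directions of stacked source gradients."""
    # source_grads: (n_samples, d) per-example gradient rows
    _, _, vt = np.linalg.svd(source_grads, full_matrices=True)
    return vt[-k:].T  # columns = tail-gradient directions

def project_update(update: np.ndarray, basis: np.ndarray) -> np.ndarray:
    """Project a raw parameter update (d,) onto the allowed subspace."""
    return basis @ (basis.T @ update)

rng = np.random.default_rng(0)
G = rng.normal(size=(32, 8))      # toy stand-in for source gradients
B = tail_subspace(G, k=2)         # plasticity allowed only here
delta = rng.normal(size=8)        # raw test-time adapter update
delta_safe = project_update(delta, B)

# delta_safe is orthogonal to the dominant source-gradient direction,
# so it disturbs source-critical behavior as little as possible.
top = np.linalg.svd(G)[2][0]
print(abs(float(top @ delta_safe)) < 1e-8)
```

In a real adapter this projection would be applied to the low-rank update matrices rather than a flat vector, but the geometry is the same: specialization happens only in directions the source tasks barely use.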