AI Navigate

LCA: Local Classifier Alignment for Continual Learning

arXiv cs.AI / 3/11/2026

Ideas & Deep Analysis · Models & Research

Key Points

  • The paper addresses the challenge of continual learning where models must adapt to new tasks without forgetting previous ones, a problem known as catastrophic forgetting.
  • It introduces Local Classifier Alignment (LCA), a novel loss function designed to better align task-specific classifiers with a continually adapted backbone, improving generalization and robustness.
  • The proposed method follows a model merging approach enhanced by LCA, outperforming existing state-of-the-art methods on multiple standard continual learning benchmarks.
  • Theoretical analysis shows that LCA not only helps the classifiers generalize across all observed tasks but also makes them more robust in changing environments.
  • This work leverages pre-trained models to enable faster and more effective continual learning, especially as tasks and data distributions diverge over time.
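The third bullet mentions a model merging approach. The paper's exact merging rule is not given here, so as a rough, generic illustration only, the common baseline of averaging per-task backbone parameters can be sketched as (all names and values below are hypothetical):

```python
# Illustrative sketch, NOT the paper's algorithm: plain parameter averaging,
# one common baseline for merging several task-adapted copies of a backbone.

def merge_backbones(task_params):
    """Average a list of per-task parameter dicts {name: list of floats}."""
    merged = {}
    for name in task_params[0]:
        vectors = [p[name] for p in task_params]
        merged[name] = [sum(vals) / len(vals) for vals in zip(*vectors)]
    return merged

# Two hypothetical task-adapted copies of the same backbone layer.
task_a = {"layer1.weight": [1.0, 2.0], "layer1.bias": [0.0, 0.0]}
task_b = {"layer1.weight": [3.0, 4.0], "layer1.bias": [2.0, 2.0]}

merged = merge_backbones([task_a, task_b])
print(merged["layer1.weight"])  # → [2.0, 3.0]
```

Real methods in this line of work typically merge weighted or sparsified parameter deltas rather than raw weights, but the averaging skeleton above is the simplest instance of the idea.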


arXiv:2603.09888 (cs)
[Submitted on 10 Mar 2026]

Title: LCA: Local Classifier Alignment for Continual Learning
Authors: Tung Tran and 2 other authors
Abstract: A fundamental requirement for intelligent systems is the ability to learn continuously under changing environments. However, models trained in this regime often suffer from catastrophic forgetting. Leveraging pre-trained models has recently emerged as a promising solution, since their generalized feature extractors enable faster and more robust adaptation. While some earlier works mitigate forgetting by fine-tuning only on the first task, this approach quickly deteriorates as the number of tasks grows and the data distributions diverge. More recent research instead seeks to consolidate task knowledge into a unified backbone, or to adapt the backbone as new tasks arrive. However, such approaches may create a potential mismatch between the task-specific classifiers and the adapted backbone. To address this issue, we propose a novel Local Classifier Alignment (LCA) loss to better align the classifiers with the backbone. Theoretically, we show that the LCA loss enables the classifier not only to generalize well across all observed tasks, but also to become more robust. Furthermore, we develop a complete solution for continual learning that follows the model merging approach and uses LCA. Extensive experiments on several standard benchmarks demonstrate that our method often achieves leading performance, sometimes surpassing state-of-the-art methods by a large margin.
Subjects: Artificial Intelligence (cs.AI)
Cite as: arXiv:2603.09888 [cs.AI]
  (or arXiv:2603.09888v1 [cs.AI] for this version)
  https://doi.org/10.48550/arXiv.2603.09888
arXiv-issued DOI via DataCite

Submission history

From: Tung Tran
[v1] Tue, 10 Mar 2026 16:46:09 UTC (320 KB)