
Correction of Transformer-Based Models with Smoothing Pseudo-Projector

arXiv cs.LG / 3/11/2026

Ideas & Deep Analysis · Models & Research

Key Points

  • The pseudo-projector is a lightweight addition to existing transformer-based language models and other neural networks that reduces noise sensitivity by correcting hidden representations.
  • Inspired by algebraic multigrid methods, the pseudo-projector acts like an orthogonal projector but uses learnable operators and therefore does not satisfy strict mathematical projection properties (see the sketch after this list).
  • Experimental evaluations show that integrating the pseudo-projector improves training dynamics and robustness on both text classification tasks and controlled synthetic benchmarks, with no adverse effects observed.
  • The approach looks promising for broader application; future work aims to extend the correction method to larger language models.
  • The method offers a new direction for improving model stability and performance by suppressing label-irrelevant input content during training.
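To make the projector analogy in the second point concrete: in the linear prototype described in the abstract, a restriction operator maps hidden states into a small coarse space and a prolongation operator maps them back. The notation below (R, P, k, d) is our own reading of that description, not taken from the paper. With a restriction $R \in \mathbb{R}^{k \times d}$ ($k \ll d$) and prolongation $P = R^\top (R R^\top)^{-1}$, the prototype correction of a hidden state $h \in \mathbb{R}^d$ is

\[
  \Pi = R^\top (R R^\top)^{-1} R, \qquad h' = (I - \Pi)\,h,
\]

where $\Pi$ is a strict orthogonal projector ($\Pi^2 = \Pi$, $\Pi^\top = \Pi$), so the subspace spanned by the rows of $R$ is removed exactly. The practical formulation replaces $R$ and $P$ with learnable operators, and $\tilde{\Pi} = P R$ is then not idempotent in general ($\tilde{\Pi}^2 \neq \tilde{\Pi}$); hence "pseudo-projector".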

arXiv:2603.09815 (cs)
[Submitted on 10 Mar 2026]

Title: Correction of Transformer-Based Models with Smoothing Pseudo-Projector

Authors: Vitaly Bulgakov
Abstract: The pseudo-projector is a lightweight modification that can be integrated into existing language models and other neural networks without altering their core architecture. It can be viewed as a hidden-representation corrector that reduces sensitivity to noise by suppressing directions induced by label-irrelevant input content. The design is inspired by the multigrid (MG) paradigm, originally developed to accelerate the convergence of iterative solvers for partial differential equations and boundary value problems, and later extended to more general linear systems through algebraic multigrid methods. We refer to the method as a pseudo-projector because its linear prototype corresponds to a strictly idempotent orthogonal projector, whereas the practical formulation employs learnable restriction and prolongation operators and therefore does not, in general, satisfy the properties of an exact orthogonal projection. We evaluate the proposed approach on transformer-based text classification tasks, as well as on controlled synthetic benchmarks, demonstrating its effectiveness in improving training dynamics and robustness. Experimental results, together with supporting theoretical heuristics, indicate consistent improvements in training behavior across a range of settings, with no adverse effects observed. Our next step will be to extend this approach to language models.
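As a rough illustration of how such a corrector could sit inside an existing model, here is a minimal PyTorch sketch. It assumes the simplest reading of the abstract (a learnable restriction/prolongation pair applied as a subtractive correction to hidden states); the module name, dimensions, and the alpha weighting are our own assumptions, not the paper's implementation.

import torch
import torch.nn as nn

class PseudoProjector(nn.Module):
    """Hypothetical sketch of a smoothing pseudo-projector (not the paper's code).

    A learnable restriction maps hidden states into a small "coarse" space
    and a learnable prolongation lifts them back; subtracting the lifted
    component suppresses directions the pair learns to associate with
    label-irrelevant content. With an orthonormal restriction and its
    transpose as prolongation this reduces to a strict orthogonal
    projector; learnable operators break exact idempotence.
    """

    def __init__(self, hidden_dim: int, coarse_dim: int, alpha: float = 1.0):
        super().__init__()
        self.restrict = nn.Linear(hidden_dim, coarse_dim, bias=False)  # R
        self.prolong = nn.Linear(coarse_dim, hidden_dim, bias=False)   # P
        self.alpha = alpha  # assumed correction strength; not from the paper

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (batch, seq_len, hidden_dim) hidden states from any layer.
        coarse = self.restrict(h)       # map to the coarse space
        lifted = self.prolong(coarse)   # map back to hidden space
        return h - self.alpha * lifted  # subtractive correction

# Usage sketch: correct a layer's output without changing the architecture.
corrector = PseudoProjector(hidden_dim=768, coarse_dim=32)
hidden = torch.randn(2, 16, 768)
corrected = corrector(hidden)
assert corrected.shape == hidden.shape

The subtractive form keeps the module a drop-in addition: with alpha = 0 the model is unchanged, which matches the abstract's claim that the core architecture is untouched.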
Subjects: Machine Learning (cs.LG); Artificial Intelligence (cs.AI)
Cite as: arXiv:2603.09815 [cs.LG]
  (or arXiv:2603.09815v1 [cs.LG] for this version)
  https://doi.org/10.48550/arXiv.2603.09815

Submission history

From: Vitaly Bulgakov [view email]
[v1] Tue, 10 Mar 2026 15:42:46 UTC (4,045 KB)