Collapse-Free Prototype Readout Layer for Transformer Encoders

arXiv cs.LG / 4/7/2026


Key Points

  • DDCL-Attention introduces a prototype-based readout layer for transformer encoders that replaces mean pooling/class tokens with a learned compression scheme using global prototype vectors and soft token-to-prototype matching.
  • The approach is designed to prevent prototype collapse by decomposing the training loss into a reconstruction term and a diversity term, keeping prototypes distinct.
  • Stability of joint training with the encoder is supported theoretically using Tikhonov’s singular perturbation theory plus explicit learning-rate constraints tied to a practical timescale condition.
  • The same framework can be used in three ways: as a final readout layer, as a differentiable codebook related to VQ-VAE, and as a hierarchical document compressor.
  • Experiments on multiple datasets validate the loss decomposition and the predicted prototype-separation dynamics, show full codebook utilization that outperforms hard vector quantization, and include an orbital debris classification case study suggesting applicability beyond typical NLP and vision tasks.
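The soft token-to-prototype matching described above can be sketched in a few lines. This is an illustrative reconstruction, not the paper's implementation: the function name, the temperature parameter `tau`, and the dot-product similarity are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def prototype_readout(tokens, prototypes, tau=1.0):
    """Soft prototype readout (illustrative sketch).

    tokens:     (n, d) encoder outputs for one sequence
    prototypes: (k, d) global learned prototype vectors
    Returns a (k, d) summary: a soft-weighted token average per prototype.
    Cost is O(n * k * d), i.e. linear in sequence length n.
    """
    logits = tokens @ prototypes.T / tau           # (n, k) token-prototype similarities
    assign = softmax(logits, axis=1)               # soft token-to-prototype matching
    weights = assign / (assign.sum(axis=0, keepdims=True) + 1e-9)
    return weights.T @ tokens                      # (k, d) compact summary
```

Because each token interacts only with the k fixed prototypes (not with other tokens), the readout stays linear in sequence length, unlike full self-attention.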

Abstract

DDCL-Attention is a prototype-based readout layer for transformer encoders that replaces simple pooling methods, such as mean pooling or class tokens, with a learned compression mechanism. It uses a small set of global prototype vectors and assigns tokens to them through soft probabilistic matching, producing compact token summaries at linear complexity in sequence length. The method offers three main advantages. First, it avoids prototype collapse through an exact decomposition of the training loss into a reconstruction term and a diversity term, ensuring that prototypes remain distinct. Second, its joint training with the encoder is shown to be stable under a practical timescale condition, using Tikhonov's singular perturbation theory and explicit learning-rate constraints. Third, the same framework supports three uses: a final readout layer, a differentiable codebook extending VQ-VAE, and a hierarchical document compressor. Experiments on four datasets confirm the theoretical predictions: the loss decomposition holds exactly, prototype separation grows as expected when the stability condition is met, and the codebook reaches full utilization, outperforming standard hard vector quantization. An additional study on orbital debris classification shows that the method also applies beyond standard NLP and vision tasks, including scientific tabular data.
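The paper's exact loss decomposition is not reproduced in this summary; as a hedged illustration, a generic reconstruction-plus-diversity objective of the kind described could look like the following (both function names and the cosine-similarity diversity penalty are assumptions, not the paper's formulation):

```python
import numpy as np

def reconstruction_loss(tokens, prototypes, assign):
    """Mean squared error between tokens and their soft prototype reconstructions.

    tokens:     (n, d) encoder outputs
    prototypes: (k, d) prototype vectors
    assign:     (n, k) soft token-to-prototype assignment weights
    """
    recon = assign @ prototypes                    # (n, d) reconstructed tokens
    return np.mean(np.sum((tokens - recon) ** 2, axis=1))

def diversity_loss(prototypes):
    """Penalize pairwise prototype similarity to keep prototypes distinct."""
    p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    gram = p @ p.T                                 # (k, k) cosine similarities
    k = len(prototypes)
    off_diag = gram - np.eye(k)                    # zero out self-similarity
    return np.sum(off_diag ** 2) / (k * (k - 1))
```

Under a combined objective of this shape, the reconstruction term pulls prototypes toward the token distribution while the diversity term pushes them apart, which is the mechanism the paper credits with preventing prototype collapse.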