Semi-Supervised Learning with Balanced Deep Representation Distributions

arXiv cs.LG / 3/24/2026


Key Points

  • The paper addresses semi-supervised text classification (SSTC) where self-training depends heavily on the accuracy of pseudo-labels for unlabeled data.
  • It identifies a “margin bias” issue stemming from mismatched representation (feature) distributions between labels in SSTC.
  • To reduce this bias, it introduces an angular margin loss and applies Gaussian linear transformations to balance the variance of label angles within each class.
  • The proposed method, S2TC-BDD, constrains label angle variances using estimates computed over both labeled and pseudo-labeled texts during self-training iterations.
  • Experiments on multi-class and multi-label settings show S2TC-BDD improves performance over state-of-the-art SSTC methods, with the largest gains when labeled data is scarce.
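The angular margin loss mentioned above can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the function names, `margin`, and `scale` values are hypothetical, and the real method adds further machinery (the Gaussian linear transformations described below).

```python
import numpy as np

def angular_margin_logits(embeddings, class_weights, labels, margin=0.3, scale=16.0):
    # L2-normalize so logits depend only on angles between texts and label vectors
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    w = class_weights / np.linalg.norm(class_weights, axis=1, keepdims=True)
    theta = np.arccos(np.clip(e @ w.T, -1.0, 1.0))
    # widen the angle to the true label, forcing a margin in angle space
    theta[np.arange(len(labels)), labels] += margin
    return scale * np.cos(theta)

def cross_entropy(logits, labels):
    # standard softmax cross-entropy over the margin-adjusted logits
    z = logits - logits.max(axis=1, keepdims=True)
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(labels)), labels].mean()
```

Because `cos` is decreasing on `[0, π]`, adding the margin lowers the true-label logit, so a classifier trained under this loss must separate labels by at least the margin in angle.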

Abstract

Semi-Supervised Text Classification (SSTC) methods mainly work in the spirit of self-training: they initialize a deep classifier by training over labeled texts, then alternately predict pseudo-labels for unlabeled texts and train the classifier over the mixture of labeled and pseudo-labeled texts. Naturally, their performance is largely affected by the accuracy of pseudo-labels for unlabeled texts. Unfortunately, this accuracy is often low because of the margin bias problem, caused by large differences between the representation distributions of labels in SSTC. To alleviate this problem, we apply the angular margin loss and perform several Gaussian linear transformations to achieve balanced label angle variances, i.e., the variance of the angles of texts within the same label. Higher pseudo-label accuracy can be achieved by constraining all label angle variances to be balanced, where the variances are estimated over both labeled and pseudo-labeled texts during self-training loops. With this insight, we propose a novel SSTC method, namely Semi-Supervised Text Classification with Balanced Deep representation Distributions (S2TC-BDD). We implement both multi-class and multi-label classification versions of S2TC-BDD by introducing several pseudo-labeling tricks and regularization terms. To evaluate S2TC-BDD, we compare it against state-of-the-art SSTC methods. Empirical results demonstrate the effectiveness of S2TC-BDD, especially when labeled texts are scarce.
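The "balanced label angle variances" estimated during self-training can be sketched as below. This is a simplified stand-in for illustration only: instead of the paper's Gaussian linear transformations, it uses a plain penalty on how far each label's angle variance drifts from the mean, estimated over labeled and pseudo-labeled texts pooled together; all function names are hypothetical.

```python
import numpy as np

def label_angles(embeddings, class_weights, labels):
    # angle between each text embedding and its own label's weight vector
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    w = class_weights / np.linalg.norm(class_weights, axis=1, keepdims=True)
    cos = np.clip((e * w[labels]).sum(axis=1), -1.0, 1.0)
    return np.arccos(cos)

def balance_penalty(angles, labels, num_classes):
    # per-label variance of angles over the pooled (labeled + pseudo-labeled)
    # batch; penalize deviation from the mean variance so no label's
    # representation distribution is much wider than the others
    variances = np.array([angles[labels == c].var() for c in range(num_classes)])
    return float(np.square(variances - variances.mean()).sum())
```

In a self-training loop, `labels` would mix ground-truth labels with the current round's pseudo-labels, and the penalty would be added to the angular margin loss as a regularization term.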