Scaling Probabilistic Transformer via Efficient Cross-Scale Hyperparameter Transfer

arXiv cs.CL / 4/29/2026


Key Points

  • The paper introduces a scaling method for the Probabilistic Transformer (PT), a white-box probabilistic model for contextual word representations that resembles standard Transformers in structure and performance.
  • PT is typically more sensitive to hyperparameter selection than standard Transformers, but the authors use Maximal Update Parametrization (muP) to enable efficient hyperparameter transfer from small to large models.
  • Using muP-based parameter rescaling, the approach scales PT up to about 0.4B parameters without requiring additional tuning (see the sketch after this list).
  • Experiments on Masked Language Modeling (MLM) show that PT outperforms standard Transformers under the same parameter budget.
  • The authors position this as a step toward more practical deployment of probabilistic models at larger scales.
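
To make the muP idea concrete, the sketch below illustrates the standard muP scaling rules for matrix-like (hidden) parameters trained with Adam: initialization standard deviation shrinks like 1/sqrt(fan_in) and the per-layer learning rate like 1/fan_in, both relative to a tuned base width. This is a generic illustration of muP, not the paper's code; the function names, base width, and base values are illustrative assumptions.

```python
# Minimal sketch of muP-style width scaling, not the paper's implementation.
# Assumption: hyperparameters were tuned at `base_width`; hidden ("matrix-like")
# parameters get init std scaled by 1/sqrt(fan_in) and Adam learning rates
# scaled by 1/fan_in relative to that base, following the standard muP rules.
import math

import torch
import torch.nn as nn


def mup_scaled_linear(fan_in: int, fan_out: int, base_fan_in: int,
                      base_std: float = 0.02) -> nn.Linear:
    """Hidden linear layer whose init std shrinks like 1/sqrt(fan_in)."""
    layer = nn.Linear(fan_in, fan_out, bias=False)
    std = base_std * math.sqrt(base_fan_in / fan_in)
    nn.init.normal_(layer.weight, mean=0.0, std=std)
    return layer


def mup_adam_lr(base_lr: float, fan_in: int, base_fan_in: int) -> float:
    """Adam learning rate for matrix-like parameters, scaled by 1/width."""
    return base_lr * base_fan_in / fan_in


# Example: values tuned at width 256 reused at width 1024 without retuning.
base_width, target_width = 256, 1024
layer = mup_scaled_linear(target_width, 4 * target_width, base_width)
optimizer = torch.optim.Adam(layer.parameters(),
                             lr=mup_adam_lr(1e-3, target_width, base_width))
```

Under this parametrization, a learning rate tuned on the narrow proxy model is expected to remain near-optimal at the wider target model, which is the cross-scale transfer the paper relies on to avoid retuning PT at 0.4B parameters.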

Abstract

Probabilistic Transformer (PT), a white-box probabilistic model for contextual word representation, has demonstrated substantial similarity to standard Transformers in both computational structure and downstream task performance on small models and small- to medium-sized datasets. However, PT is less robust to hyperparameter choices than standard Transformers, making it harder to scale efficiently. In this work, we follow Maximal Update Parametrization (muP) to rescale PT's parameters, so that hyperparameters optimized on small models can be transferred to larger models without additional tuning. With this approach, we successfully scale PT to models with up to 0.4B parameters. Experiments show that PT consistently outperforms standard Transformers under the same parameter budget on Masked Language Modeling (MLM) tasks. We hope this work will contribute to the practical deployment of probabilistic models at substantially larger scales in the future.
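
Beyond the learning-rate and initialization rules above, two further muP conventions matter for Transformer-like architectures: attention logits are scaled by 1/d_head rather than 1/sqrt(d_head), and output logits are damped by the width ratio relative to the tuned base width. The sketch below shows these generic muP rules; whether and exactly how PT applies analogous rescaling is detailed in the paper itself, and all names and shapes here are illustrative.

```python
# Hedged sketch of two additional muP conventions for Transformer-like models.
import torch


def mup_attention_scores(q: torch.Tensor, k: torch.Tensor) -> torch.Tensor:
    """muP scales attention logits by 1/d_head instead of 1/sqrt(d_head),
    keeping logit magnitudes stable as the head dimension grows."""
    d_head = q.shape[-1]
    return (q @ k.transpose(-2, -1)) / d_head


def mup_output_logits(hidden: torch.Tensor, readout: torch.Tensor,
                      base_width: int) -> torch.Tensor:
    """muP damps readout logits by base_width / width so the output layer
    does not blow up when the model is widened past the tuned base width."""
    width = hidden.shape[-1]
    return (hidden @ readout.t()) * (base_width / width)


# Toy shapes: batch 2, 4 heads, sequence 8, head dim 64, vocab 100, width 512.
q = torch.randn(2, 4, 8, 64)
k = torch.randn(2, 4, 8, 64)
scores = mup_attention_scores(q, k)                            # (2, 4, 8, 8)

hidden = torch.randn(2, 8, 512)
readout = torch.randn(100, 512)                                # vocab x width
logits = mup_output_logits(hidden, readout, base_width=256)    # (2, 8, 100)
```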