Joint Embedding Variational Bayes
arXiv stat.ML · March 31, 2026
Key Points
- The paper introduces Variational Joint Embedding (VJE), a reconstruction-free, non-contrastive self-supervised learning framework that uses a latent-variable variational formulation in representation space.
- VJE maximizes a symmetric conditional ELBO by defining a likelihood directly on target embeddings, avoiding pointwise compatibility objectives and giving the learned representations probabilistic semantics (see the ELBO sketch after this list).
- The conditional likelihood is modeled with a heavy-tailed Student-t distribution over a polar representation of target embeddings, using a directional–radial decomposition to separate angular alignment from magnitude consistency and reduce norm-related issues (sketched in the likelihood example below).
- An amortized inference network produces a diagonal Gaussian posterior with uncertainty that is tied to the directional likelihood’s feature-wise variances, yielding anisotropic uncertainty without additional projection heads.
- Experiments on ImageNet-1K, CIFAR-10/100, and STL-10 show VJE is competitive on linear-probe and k-NN evaluations and improves out-of-distribution detection using representation-space likelihoods (a hypothetical scoring sketch follows the examples below).
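To make the objective in the second bullet concrete, here is a minimal PyTorch sketch of one direction of the conditional ELBO with an amortized diagonal Gaussian posterior, symmetrized over the two augmented views. All names (`encoder`, `posterior_head`, `log_likelihood_fn`), the standard-normal prior, and the stop-gradient on targets are illustrative assumptions, not the paper's exact implementation.

```python
import torch
from torch.distributions import Normal, kl_divergence

def conditional_elbo(q_mu, q_logvar, log_likelihood_fn, z_target):
    """One direction of the conditional ELBO:
    E_q[log p(z_target | w)] - KL(q(w | z_context) || p(w)),
    with a diagonal Gaussian posterior and a standard normal prior (assumed)."""
    q = Normal(q_mu, (0.5 * q_logvar).exp())
    prior = Normal(torch.zeros_like(q_mu), torch.ones_like(q_mu))
    w = q.rsample()                               # reparameterized latent sample
    log_lik = log_likelihood_fn(w, z_target)      # log p(z_target | w), per example
    kl = kl_divergence(q, prior).sum(dim=-1)      # analytic diagonal-Gaussian KL
    return log_lik - kl

def symmetric_elbo(x_a, x_b, encoder, posterior_head, log_likelihood_fn):
    """Symmetrize the conditional ELBO across the two augmented views."""
    z_a, z_b = encoder(x_a), encoder(x_b)
    mu_a, logvar_a = posterior_head(z_a)          # amortized q(w | view a)
    mu_b, logvar_b = posterior_head(z_b)          # amortized q(w | view b)
    # Stop-gradient on targets is a common joint-embedding choice; an assumption here.
    elbo_ab = conditional_elbo(mu_a, logvar_a, log_likelihood_fn, z_b.detach())
    elbo_ba = conditional_elbo(mu_b, logvar_b, log_likelihood_fn, z_a.detach())
    return 0.5 * (elbo_ab + elbo_ba).mean()       # maximize this (minimize its negation)
```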
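The heavy-tailed likelihood from the third bullet can be sketched in the same style. Below, the target embedding is split into a unit direction and a norm; a per-feature Student-t on the direction carries the feature-wise scales mentioned in the fourth bullet, and a Student-t on the log-norm scores magnitude consistency. The factorization, the log-norm choice, and the parameter names are assumptions for illustration.

```python
import torch
from torch.distributions import StudentT

def polar_decompose(z, eps=1e-8):
    """Directional-radial decomposition: unit direction and norm."""
    r = z.norm(dim=-1, keepdim=True).clamp_min(eps)
    return z / r, r

def polar_student_t_log_prob(pred_dir, pred_log_norm, log_scale, df, z_target):
    """log p(z_target | w) over a polar representation of the target:
    an independent per-feature Student-t on the direction (anisotropic via
    feature-wise scales) plus a Student-t on the log-norm (assumed form)."""
    u_t, r_t = polar_decompose(z_target)
    # Angular alignment: heavy tails damp the influence of outlier features.
    log_p_dir = StudentT(df, loc=pred_dir, scale=log_scale.exp()).log_prob(u_t).sum(-1)
    # Magnitude consistency, scored on the log of the target norm.
    log_p_rad = StudentT(df, loc=pred_log_norm, scale=1.0).log_prob(r_t.squeeze(-1).log())
    return log_p_dir + log_p_rad
```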
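Finally, the OOD result in the last bullet suggests a simple scoring rule: test inputs whose embeddings receive low likelihood under the learned representation-space density are flagged as out-of-distribution. A hypothetical usage sketch, reusing the assumed components above:

```python
import torch

@torch.no_grad()
def ood_scores(x, encoder, posterior_head, log_likelihood_fn):
    """Score inputs by representation-space log-likelihood;
    lower scores suggest out-of-distribution inputs (assumed usage)."""
    z = encoder(x)
    mu, logvar = posterior_head(z)
    w = mu                              # posterior mean at test time (assumption)
    return log_likelihood_fn(w, z)      # higher = more in-distribution
```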