Disclosure: first author.
The paper was just published in TMLR, and I figured it might be of interest to some people here. It is mathematically fairly dense but conceptually straightforward: to add operational variational semantics to joint-embedding architectures for non-contrastive representation learning, we make three coupled choices (a toy code sketch follows the list):
- Factorize the embedding likelihood: the likelihood is split into directional and radial terms, so angular alignment and representation norm are modelled separately. The radial/norm term does not drive accuracy on its own, but the factorization avoids the norm-direction coupling that otherwise produces pathological solutions.
- Anchor posterior/likelihood uncertainty: the posterior variance is tied to the likelihood scale, so uncertainty directly governs both inference and the embedding likelihood.
- Use a heavy-tailed likelihood: the likelihood uses a Student-t form rather than a Gaussian. This matters empirically: as the likelihood approaches the Gaussian limit (degrees of freedom → ∞), training becomes unstable and the model fails catastrophically.
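To make the three choices concrete, here is a minimal PyTorch sketch, not the paper's implementation: the vMF-style directional term (with its normalizer dropped), the Student-t radial density, and all names and hyperparameters below are my stand-ins, so check the code release for the actual formulation.

```python
import math
import torch
import torch.nn.functional as F

def student_t_logpdf(x, loc, scale, df):
    # Student-t log-density; recovers a Gaussian in the df -> infinity limit,
    # which is exactly the regime the post reports as unstable.
    z = (x - loc) / scale
    const = math.lgamma((df + 1) / 2) - math.lgamma(df / 2) - 0.5 * math.log(df * math.pi)
    return const - torch.log(scale) - 0.5 * (df + 1) * torch.log1p(z ** 2 / df)

def factorized_loglik(z, target, scale, df=3.0, kappa=10.0):
    # Choice 1 -- factorization: direction and norm get separate terms, so
    # angular alignment cannot be traded off against norm growth or shrinkage.
    directional = kappa * F.cosine_similarity(z, target, dim=-1)  # vMF-style, unnormalized
    # Choice 3 -- heavy tails: Student-t on the norm instead of a Gaussian.
    radial = student_t_logpdf(z.norm(dim=-1), target.norm(dim=-1), scale, df)
    return directional + radial

# Choice 2 -- anchoring: the posterior head emits (mu, scale), and the SAME
# scale parameterizes both the reparameterized sample and the likelihood.
mu, scale = torch.randn(8, 128), torch.rand(8) * 0.5 + 0.1
z = mu + scale.unsqueeze(-1) * torch.randn_like(mu)   # sample from q(z|x)
target = torch.randn(8, 128)                          # other branch's embedding
loglik = factorized_loglik(z, target, scale)
```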
Together, these choices let the model learn anisotropic, feature-wise uncertainty, which we evaluate in downstream OOD detection experiments, including a comparison against VI-SimSiam.
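And a rough sketch of how that feature-wise uncertainty can be turned into an OOD score; the encoder interface and the mean-log-variance scoring rule here are my assumptions, not necessarily the paper's evaluation protocol:

```python
import torch

@torch.no_grad()
def ood_score(encoder, x):
    # `encoder` is assumed to return a per-feature posterior (mu, scale).
    # Averaging the predicted log-variances gives a scalar uncertainty score;
    # inputs scoring above a threshold fit on in-distribution data get
    # flagged as OOD.
    mu, scale = encoder(x)
    return (2.0 * torch.log(scale)).mean(dim=-1)
```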
arXiv | OpenReview | Code