Beyond Gaussian Bottlenecks: Topologically Aligned Encoding of Vision-Transformer Feature Spaces

arXiv cs.CV / 5/1/2026


Key Points

  • The paper argues that visual world modeling failures often stem from latent representations not preserving 3D geometry and physically consistent camera dynamics, not just from insufficient model capacity.
  • It introduces S^2VAE, a geometry-first latent learning framework that encodes the 3D state of a scene (camera motion, depth, and point-level structure) rather than appearance alone.
  • S^2VAE is a variational autoencoder whose bottleneck uses a product of Power Spherical latent distributions, enforcing hyperspherical structure that preserves directional and geometric semantics under strong compression.
  • Experiments on depth estimation, camera pose recovery, and point cloud reconstruction show that hyperspherical latents outperform standard Gaussian bottlenecks, especially at high compression ratios.
  • The authors conclude that latent geometry should be treated as a core design element for physically grounded vision and world models.

Abstract

Modern visual world modeling systems increasingly rely on high-capacity architectures and large-scale data to produce plausible motion, yet they often fail to preserve underlying 3D geometry or physically consistent camera dynamics. A key limitation lies not only in model capacity, but in the latent representations used to encode geometric structure. We propose S^2VAE, a geometry-first latent learning framework that focuses on compressing and representing the latent 3D state of a scene, including camera motion, depth, and point-level structure, rather than modeling appearance alone. Building on representations from a Visual Geometry Grounded Transformer (VGGT), we introduce a novel type of variational autoencoder using a product of Power Spherical latent distributions, explicitly enforcing hyperspherical structure in the bottleneck to preserve directional and geometric semantics under strong compression. Across depth estimation, camera pose recovery, and point cloud reconstruction, we show that geometry-aligned hyperspherical latents consistently outperform conventional Gaussian bottlenecks, particularly in high-compression regimes. Our results highlight latent geometry as a first-class design choice for physically grounded visual and world models.
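The Power Spherical distribution (De Cao & Aziz, 2020) that the bottleneck builds on is reparameterizable, which is what makes it usable as a VAE latent. Below is a minimal NumPy sketch of its standard sampling path: draw the cosine-to-mean via a Beta distribution, draw a uniform tangent direction, then rotate from the north pole onto the mean direction with a Householder reflection. This illustrates the published distribution only; the function name and all parameters are illustrative, not the paper's actual code, and S^2VAE would apply one such factor per latent dimension group to form the product distribution.

```python
import numpy as np

def sample_power_spherical(mu, kappa, n_samples, rng):
    """Reparameterizable sampling from a Power Spherical distribution.

    mu:    unit-norm mean direction on S^{d-1}, shape (d,)
    kappa: concentration (> 0); larger values cluster samples around mu
    """
    d = mu.shape[0]
    # 1) Sample the cosine similarity t in [-1, 1] between sample and mu.
    z = rng.beta((d - 1) / 2 + kappa, (d - 1) / 2, size=n_samples)
    t = 2.0 * z - 1.0
    # 2) Sample a uniform direction on the (d-2)-sphere for the tangent part.
    v = rng.normal(size=(n_samples, d - 1))
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    # 3) Assemble unit vectors concentrated around the north pole e1.
    y = np.concatenate([t[:, None], np.sqrt(1.0 - t**2)[:, None] * v], axis=1)
    # 4) Householder reflection mapping e1 onto mu (norm-preserving).
    e1 = np.zeros(d)
    e1[0] = 1.0
    u = e1 - mu
    u /= np.linalg.norm(u) + 1e-12
    return y - 2.0 * np.outer(y @ u, u)

rng = np.random.default_rng(0)
mu = np.array([0.0, 0.0, 1.0])
samples = sample_power_spherical(mu, kappa=50.0, n_samples=512, rng=rng)
# Every sample lies on the unit sphere; with kappa=50 they cluster near mu.
```

Because every step is a differentiable transform of the noise, gradients flow through the sampler, and the latent stays on the hypersphere by construction, which is the directional structure the paper argues Gaussian bottlenecks fail to preserve.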