CoreFlow: Low-Rank Matrix Generative Models

arXiv cs.LG · April 29, 2026


Key Points

  • CoreFlow is a geometry-preserving low-rank generative modeling approach for learning distributions over matrix-valued data from high-dimensional, incomplete, or limited samples.
  • The method learns shared row and column subspaces across the matrix distribution, reducing the problem to a continuous normalizing flow trained only on a low-dimensional “core.”
  • By separating shared matrix geometry from sample-specific variation, CoreFlow substantially improves training efficiency and helps preserve matrix structure.
  • It extends to incomplete matrices using masked Riemannian updates and iterative completion, enabling robust learning despite missing entries.
  • Benchmarks on real and synthetic data show improved spectral and moment-level generation quality in few-sample regimes, while staying competitive even under heavy compression (to 9% of the ambient dimension) and with up to 40% of training entries missing.
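
The core-extraction idea behind the first three points can be sketched in a few lines of NumPy. This is an illustrative reconstruction under assumptions (shared subspaces estimated from stacked second moments; the continuous normalizing flow trained on the cores is omitted), not the paper's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 60, 40, 5          # ambient matrix sizes and shared rank (illustrative)

# Synthetic samples sharing a row subspace U_true and a column subspace V_true.
U_true = np.linalg.qr(rng.standard_normal((m, r)))[0]
V_true = np.linalg.qr(rng.standard_normal((n, r)))[0]
samples = [U_true @ rng.standard_normal((r, r)) @ V_true.T for _ in range(30)]

# Estimate the shared subspaces from the aggregated second moments.
row_cov = sum(X @ X.T for X in samples)
col_cov = sum(X.T @ X for X in samples)
U = np.linalg.svd(row_cov)[0][:, :r]   # leading shared row subspace
V = np.linalg.svd(col_cov)[0][:, :r]   # leading shared column subspace

# Each sample reduces to a small r x r "core"; a flow would be trained on these.
cores = [U.T @ X @ V for X in samples]

# Lifting a core back to the ambient space recovers the sample
# up to subspace-estimation error.
X0_hat = U @ cores[0] @ V.T
err = np.linalg.norm(X0_hat - samples[0]) / np.linalg.norm(samples[0])
print(round(err, 10))
```

Training a generative model on the 5×5 cores instead of the 60×40 matrices is what yields the efficiency gains described above: the flow sees only the low-dimensional sample-specific variation, while the shared geometry lives in the fixed subspaces U and V.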

Abstract

Learning matrix-valued distributions from high-dimensional and possibly incomplete training data is challenging: ambient-space generative modeling is computationally expensive and statistically fragile when the matrix dimension is large but the sample size is limited. We propose CoreFlow, a geometry-preserving low-rank flow model that learns shared row/column subspaces across the matrix distribution, and then trains a continuous normalizing flow only on the induced low-dimensional core. CoreFlow is designed for settings where shared low-rank matrix geometry is present, especially in high-dimensional limited-sample regimes. This separates shared matrix geometry from sample-specific variation, preserves matrix structure, and substantially improves training efficiency. The same framework also handles incomplete training matrices through masked Riemannian updates and iterative completion. Across real and synthetic benchmarks, CoreFlow substantially improves spectral and moment-level generation quality in few-sample regimes while remaining competitive in data-rich settings, even under compression to 9% of the ambient dimension and with up to 40% missing training entries.
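
The abstract's handling of incomplete matrices via iterative completion can be illustrated with a simple project-and-impute loop. This is a hypothetical sketch: the paper's masked Riemannian updates are not detailed in this summary, so a generic hard-impute-style iteration onto fixed shared subspaces stands in for them:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, r = 30, 20, 3
U = np.linalg.qr(rng.standard_normal((m, r)))[0]   # assumed shared row subspace
V = np.linalg.qr(rng.standard_normal((n, r)))[0]   # assumed shared column subspace
X_true = U @ rng.standard_normal((r, r)) @ V.T

mask = rng.random((m, n)) > 0.4          # ~40% of entries missing, as in the abstract
X = np.where(mask, X_true, 0.0)          # missing entries initialized to zero

for _ in range(100):
    # Project the current estimate onto the shared low-rank subspaces...
    core = U.T @ X @ V
    X_low = U @ core @ V.T
    # ...then overwrite the observed entries with their known values (masked update).
    X = np.where(mask, X_true, X_low)

err = np.linalg.norm(X - X_true) / np.linalg.norm(X_true)
print(round(err, 6))
```

Each pass shrinks the error on the missing entries, since the true matrix lies in both constraint sets (the low-rank subspace model and the set of matrices matching the observed entries), so the alternating updates contract toward it.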