Asymptotic Theory for Graphical SLOPE: Precision Estimation and Pattern Convergence

arXiv stat.ML / April 15, 2026


Key Points

  • The paper develops an asymptotic theory for Graphical SLOPE (GSLOPE) in precision matrix estimation, focusing on how it recovers sparse and clustered edge structures with similar strengths.
  • In a fixed-dimensional setting, it proves that the root-n scaled estimation error converges to the unique minimizer of a strictly convex optimization problem tied to the directional derivative of the SLOPE penalty.
  • It also establishes convergence of the induced SLOPE pattern, giving an asymptotic characterization of the edge-clustering structure selected by the estimator.
  • Comparisons with GLASSO show GSLOPE’s grouping property can significantly improve accuracy when the underlying precision matrix has structured edge patterns.
  • For non-Gaussian data, the authors derive limiting distributions for Gaussian-loss estimators under elliptical distributions, quantifying the variance inflation induced by heavy tails, and for TSLOPE under the multivariate t-loss; they find TSLOPE performs better under heavy-tailed data-generating mechanisms.
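For context, the SLOPE penalty invoked above is the sorted-ℓ1 norm: the parameters' absolute values are sorted in decreasing order and paired with a non-increasing weight sequence, which is what lets entries of equal magnitude tie into a common cluster (the "SLOPE pattern"). A minimal sketch of that penalty, not the paper's code; the names are illustrative:

```python
import numpy as np

def slope_penalty(theta, lam):
    """Sorted-L1 (SLOPE) norm: sum_i lam_i * |theta|_(i),
    where |theta|_(1) >= |theta|_(2) >= ... are the sorted
    absolute values and lam is non-increasing."""
    abs_sorted = np.sort(np.abs(theta))[::-1]        # magnitudes, descending
    lam_sorted = np.sort(np.asarray(lam))[::-1]      # weights, non-increasing
    return float(np.dot(lam_sorted, abs_sorted))

# Entries of equal magnitude (0.5 and -0.5) share the largest weights,
# which is the mechanism behind SLOPE's grouping property:
theta = np.array([0.5, -0.5, 0.2, 0.0])
lam = np.array([0.4, 0.3, 0.2, 0.1])
print(round(slope_penalty(theta, lam), 6))  # 0.39
```

Because the weights are matched to ranks rather than to coordinates, perturbing one member of a tied pair changes which weight it receives, which is why ties (clusters) are stable solutions.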

Abstract

This paper studies Graphical SLOPE for precision matrix estimation, with emphasis on its ability to recover both sparsity and clusters of edges with equal or similar strength. In a fixed-dimensional regime, we establish that the root-n scaled estimation error converges to the unique minimizer of a strictly convex optimization problem defined through the directional derivative of the SLOPE penalty. We also establish convergence of the induced SLOPE pattern, thereby obtaining an asymptotic characterization of the clustering structure selected by the estimator. A comparison with GLASSO shows that the grouping property of SLOPE can substantially improve estimation accuracy when the precision matrix exhibits structured edge patterns. To assess the effect of departures from Gaussianity, we then analyze Gaussian-loss precision matrix estimation under elliptical distributions. In this setting, we derive the limiting distribution and quantify the inflation in variability induced by heavy tails relative to the Gaussian benchmark. We also study TSLOPE, based on the multivariate t-loss, and derive its limiting distribution. The results show that TSLOPE offers clear advantages over GSLOPE under heavy-tailed data-generating mechanisms. Simulation evidence suggests that these qualitative conclusions persist in high-dimensional settings, and an empirical application shows that SLOPE-based estimators, especially TSLOPE, can uncover economically meaningful clustered dependence structures.
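The criterion described in the abstract combines the Gaussian log-likelihood loss for a precision matrix with a SLOPE penalty on its off-diagonal entries. A hedged sketch of evaluating such an objective, assuming this standard form; it is not the authors' implementation, and `gslope_objective` and its arguments are illustrative names:

```python
import numpy as np

def gslope_objective(Theta, S, lam):
    """Illustrative GSLOPE-style objective:
    -log det(Theta) + tr(S @ Theta) + SLOPE penalty on the
    off-diagonal entries of the precision matrix Theta.
    S is the sample covariance; lam must be non-increasing,
    with one weight per upper off-diagonal entry."""
    sign, logdet = np.linalg.slogdet(Theta)
    if sign <= 0:
        return np.inf                                # Theta must be positive definite
    nll = -logdet + np.trace(S @ Theta)              # Gaussian loss
    off = Theta[np.triu_indices_from(Theta, k=1)]    # upper off-diagonal entries
    pen = float(np.dot(lam, np.sort(np.abs(off))[::-1]))  # sorted-L1 norm
    return nll + pen

S = np.eye(3)                      # toy sample covariance
lam = np.array([0.3, 0.2, 0.1])    # one weight per off-diagonal entry
print(gslope_objective(np.eye(3), S, lam))  # 3.0 at the identity
```

Penalizing only the off-diagonal entries leaves the diagonal (conditional precisions) unshrunk, while the shared, rank-matched weights encourage off-diagonal entries of similar strength to be estimated as exactly equal, producing the clustered edge patterns the paper analyzes.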