Sparse Graph Learning from Sparse Data via Fiedler Number Maximization

arXiv cs.LG / 4/30/2026


Key Points

  • The paper proposes learning a sparse, connected graph from very limited (sparse) observations, even when the sample size K is much smaller than the signal dimension N and the data distribution is unknown.
  • It introduces Fiedler-number maximization (using the second-smallest eigenvalue of the graph Laplacian, a standard measure of connectivity) as a robust regularization term in the sparse graph learning objective.
  • The authors develop a greedy algorithm that weakens/removes edges one by one using eigenvalue perturbation bounds to limit how each edge change affects the Fiedler number.
  • They also present a parallel variant that uses Cheeger's inequality to recursively partition the graph and identify, in a distributed manner, an edge whose weakening or removal improves the objective.
  • Simulation results indicate that maximizing the Fiedler number makes sparse graph estimates more robust and improves performance over prior sparse graph learning methods.
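
To make the central quantity concrete, here is a minimal sketch (not from the paper) of computing the Fiedler number from a weighted adjacency matrix with numpy. The function name `fiedler_number` and the toy graphs are illustrative assumptions; the defining fact used is standard: the graph is connected if and only if the second-smallest Laplacian eigenvalue is strictly positive.

```python
import numpy as np

def fiedler_number(W):
    """Second-smallest eigenvalue of the combinatorial Laplacian L = D - W.

    W: symmetric (N, N) adjacency/weight matrix with zero diagonal.
    The graph is connected iff the returned value is strictly positive.
    """
    L = np.diag(W.sum(axis=1)) - W
    eigvals = np.linalg.eigvalsh(L)  # ascending order for symmetric matrices
    return eigvals[1]

# Toy example: a 4-node path graph (connected)...
path = np.zeros((4, 4))
for i, j in [(0, 1), (1, 2), (2, 3)]:
    path[i, j] = path[j, i] = 1.0

# ...versus two disjoint edges (disconnected).
split = np.zeros((4, 4))
for i, j in [(0, 1), (2, 3)]:
    split[i, j] = split[j, i] = 1.0

# fiedler_number(path) is 2 - sqrt(2) (positive: connected);
# fiedler_number(split) is 0 up to numerical noise (disconnected).
```

Maximizing this quantity during sparsification therefore acts as a guard against edge removals that would fragment the estimated graph.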

Abstract

We aim to learn a sparse and connected graph from sparse data, where the number of observations K can be substantially smaller than the signal dimension N for signals x in R^N, and the underlying distribution is unknown. In this severely ill-posed setting, we incorporate the Fiedler number (the second eigenvalue of the graph Laplacian matrix, which quantifies connectedness) as a robust regularization term in the sparse graph learning objective. We first develop a greedy algorithm that iteratively selects one edge globally for weakening/removal to reduce the objective, leveraging eigenvalue perturbation theorems that bound the adverse effect of an edge change on the Fiedler number. Next, we design a parallel variant, based on Cheeger's inequality, that recursively partitions an input graph into two sub-graphs using an approximate Cheeger cut to find an optimal edge in a distributed manner. Simulation experiments show that Fiedler number maximization robustifies sparse graph estimates, outperforming previous sparse graph learning algorithms.
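
The greedy step can be illustrated with a deliberately naive sketch: among all remaining edges, delete the one whose removal leaves the largest Fiedler number, and stop before the graph would disconnect. This brute-force version recomputes an eigendecomposition per candidate edge; the paper's actual algorithm instead uses eigenvalue perturbation bounds to avoid exactly this cost. Function names and the stopping tolerance below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fiedler(W):
    """Second-smallest eigenvalue of the Laplacian L = D - W."""
    L = np.diag(W.sum(axis=1)) - W
    return np.linalg.eigvalsh(L)[1]

def greedy_sparsify(W, n_remove):
    """Naive connectivity-aware greedy edge removal (illustration only).

    At each step, delete the edge whose removal leaves the largest
    Fiedler number, i.e., hurts connectivity the least. Stops early if
    every candidate removal would disconnect the graph.
    """
    W = W.copy()
    n = len(W)
    for _ in range(n_remove):
        edges = [(i, j) for i in range(n) for j in range(i + 1, n) if W[i, j] > 0]
        best = None
        for i, j in edges:
            trial = W.copy()
            trial[i, j] = trial[j, i] = 0.0
            f = fiedler(trial)
            if best is None or f > best[0]:
                best = (f, i, j)
        if best is None or best[0] <= 1e-12:  # any removal would disconnect
            break
        _, i, j = best
        W[i, j] = W[j, i] = 0.0
    return W
```

For example, starting from a complete 4-node graph (6 edges) and removing 2 edges leaves a 4-edge graph that is still connected, since the loop always keeps the Fiedler number strictly positive.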