Contraction and Hourglass Persistence for Learning on Graphs, Simplices, and Cells
arXiv stat.ML, April 21, 2026
Key Points
- The paper studies how persistent homology (PH) is incorporated into graph neural networks (GNNs) via increasing subgraph inclusions, highlighting limitations of this common “inclusion-only” procedure.
- It proposes analyzing contractions as a principled topological operation and introduces Contraction Homology (CH) to study the persistence of contraction sequences.
- The authors show that forward (inclusion-based) PH and CH have different expressivity, and they introduce Hourglass Persistence, which alternates inclusions and contractions to improve expressivity, learnability, and stability.
- They extend the framework beyond graphs to simplicial and cellular networks, and provide efficient, pluggable algorithms for end-to-end differentiable GNN pipelines with consistent empirical gains over multiple PH baselines.
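To make the "inclusion-only" procedure in the first key point concrete, the following is a minimal illustrative sketch (not code from the paper): computing 0-dimensional persistent homology of a graph under an edge-weight filtration with a union-find structure. Vertices enter at filtration value 0 and edges at their weights, giving the increasing sequence of subgraphs that standard PH-in-GNN pipelines track; the function name and interface are assumptions for illustration.

```python
# Hypothetical sketch: H0 persistence of an edge-weight filtration on a graph.
# Vertices appear at time 0; edges appear at their weight, producing the
# growing ("inclusion-only") sequence of subgraphs that forward PH tracks.

def h0_persistence(num_vertices, weighted_edges):
    """Return (birth, death) pairs for connected components.

    weighted_edges: list of (u, v, weight) triples.
    Components that never merge get death = inf.
    """
    parent = list(range(num_vertices))
    birth = [0.0] * num_vertices  # every vertex is born at filtration value 0

    def find(x):
        # path-halving union-find lookup
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    pairs = []
    for u, v, w in sorted(weighted_edges, key=lambda e: e[2]):
        ru, rv = find(u), find(v)
        if ru == rv:
            continue  # edge closes a cycle: no 0-dimensional event
        # elder rule: the component born later dies when the two merge
        young, old = (ru, rv) if birth[ru] >= birth[rv] else (rv, ru)
        pairs.append((birth[young], w))
        parent[young] = old
    # components that survive the whole filtration persist forever
    roots = {find(x) for x in range(num_vertices)}
    pairs.extend((birth[r], float("inf")) for r in roots)
    return pairs
```

On the path graph 0-1-2 with edge weights 1.0 and 2.0, this yields the pairs (0, 1.0), (0, 2.0), and (0, inf): each merge kills one component, and the final component persists. Contractions, by contrast, shrink the graph rather than grow it, which is why the paper treats them as a separate operation outside this inclusion-only template.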