Aitchison Embeddings for Learning Compositional Graph Representations
arXiv cs.LG / 5/4/2026
Key Points
- The paper introduces a compositional graph embedding framework that models each node as a mixture over latent archetypal factors rather than as an opaque, unconstrained Euclidean vector.
- It leverages Aitchison geometry (the natural geometry for comparing probability-like mixtures) and uses isometric log-ratio (ILR) coordinates, so embeddings preserve Aitchison distances while still permitting unconstrained optimization in Euclidean space (see the ILR sketch after this list).
- The resulting embeddings are intrinsically interpretable because their geometry directly reflects trade-offs among archetypes, avoiding the need for post-hoc explanation methods.
- Experiments on node classification and link prediction show competitive performance against strong baselines while also enabling coherent behavior under component restriction.
- The method uses subcompositional coherence to support principled removal and renormalization of components, including analyses that delete archetype dimensions to study how groups of archetypes affect representations and predictions (see the closure sketch below).
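
To make the ILR point concrete, here is a minimal sketch of how isometric log-ratio coordinates map a composition (a point on the simplex) to an unconstrained Euclidean vector while preserving Aitchison distance. The Helmert-style orthonormal basis used below is one standard ILR construction, not necessarily the paper's exact parameterization, and the function names are illustrative.

```python
import numpy as np

def clr(x):
    """Centered log-ratio: log(x) minus its mean; maps the simplex to a hyperplane."""
    logx = np.log(x)
    return logx - logx.mean(axis=-1, keepdims=True)

def helmert_basis(d):
    """Orthonormal basis of the CLR hyperplane (rows sum to zero), a standard ILR choice."""
    H = np.zeros((d - 1, d))
    for i in range(d - 1):
        H[i, : i + 1] = 1.0       # i+1 leading ones
        H[i, i + 1] = -(i + 1)    # balancing negative entry, so the row sums to zero
        H[i] /= np.linalg.norm(H[i])
    return H  # shape (d-1, d), rows orthonormal and orthogonal to the all-ones vector

def ilr(x, basis):
    """Isometric log-ratio coordinates: an unconstrained vector in R^(d-1)."""
    return clr(x) @ basis.T

# Two node embeddings as mixtures over d = 4 hypothetical archetypes
rng = np.random.default_rng(0)
p = rng.dirichlet(np.ones(4))
q = rng.dirichlet(np.ones(4))

B = helmert_basis(4)
# Aitchison distance equals Euclidean distance in ILR coordinates (the isometry)
d_aitchison = np.linalg.norm(clr(p) - clr(q))
d_ilr = np.linalg.norm(ilr(p, B) - ilr(q, B))
assert np.isclose(d_aitchison, d_ilr)
```

Because the ILR image is all of R^(d-1), gradient-based optimizers can work directly on the coordinates with no simplex constraint, which is the practical payoff the summary describes.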
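
And here is a minimal sketch of the subcompositional-coherence property behind the component-removal analyses: dropping components and renormalizing (closure) leaves the log-ratios among the retained parts unchanged, so conclusions drawn on a subcomposition cannot contradict those drawn on the full composition. The component indices and function name are illustrative assumptions.

```python
import numpy as np

def subcomposition(x, keep):
    """Remove components not in `keep` and renormalize (closure) so parts sum to 1."""
    sub = x[..., keep]
    return sub / sub.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(1)
p = rng.dirichlet(np.ones(5))   # mixture over 5 hypothetical archetypes

keep = [0, 2, 3]                # drop archetypes 1 and 4 (a hypothetical grouping)
s = subcomposition(p, keep)

# Coherence: log-ratios among the retained parts survive closure intact,
# which is what licenses principled dimension-removal studies.
assert np.isclose(np.log(p[0] / p[2]), np.log(s[0] / s[1]))
assert np.isclose(np.log(p[2] / p[3]), np.log(s[1] / s[2]))
```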