H-Probes: Extracting Hierarchical Structures From Latent Representations of Language Models
arXiv cs.CL / 5/5/2026
Key Points
- The paper introduces H-probes, a family of linear probing methods designed to extract hierarchical information—such as node depth and pairwise distances—from language model latent representations.
- Experiments on synthetic tree-traversal tasks show that H-probes can reliably identify the subspaces that encode the hierarchical structure needed to solve the tasks.
- Ablation studies indicate that the hierarchy-containing subspaces are low-dimensional and causally important for strong task performance, with some generalization both in-domain and out-of-domain.
- The authors also find weaker but analogous hierarchical structure in real-world hierarchical reasoning settings, including mathematical reasoning traces, suggesting hierarchy is represented beyond surface syntax.
- Overall, the results suggest that language models encode hierarchical structure at deeper levels of abstraction, potentially including aspects of the reasoning process itself.
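The core idea of a linear probe for hierarchical structure can be illustrated with a minimal sketch. The code below is a hypothetical illustration, not the authors' implementation: it synthesizes "hidden states" whose representation linearly encodes a node's tree depth, fits a ridge-regularized linear probe, and checks that the probe recovers depth on held-out states.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_train, n_test = 64, 500, 100

# Assumed ground truth: a single "depth direction" hidden in the representation.
w_true = rng.normal(size=d_model)
depths_train = rng.integers(0, 8, size=n_train).astype(float)
depths_test = rng.integers(0, 8, size=n_test).astype(float)

def make_states(depths):
    # Each synthetic state = depth * depth-direction + isotropic noise.
    return np.outer(depths, w_true) + 0.1 * rng.normal(size=(len(depths), d_model))

H_train, H_test = make_states(depths_train), make_states(depths_test)

# Ridge-regression probe: w = (H^T H + lam * I)^{-1} H^T y
lam = 1e-2
w_probe = np.linalg.solve(
    H_train.T @ H_train + lam * np.eye(d_model),
    H_train.T @ depths_train,
)

pred = H_test @ w_probe
mse = float(np.mean((pred - depths_test) ** 2))
print(f"held-out depth MSE: {mse:.4f}")
```

If depth really is linearly decodable, the held-out error is near zero; a probe that fails on representations where the structure is absent is what makes the ablation-style causal claims in the paper meaningful.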