AI Navigate

Learning Constituent Headedness

arXiv cs.CL / 3/17/2026

📰 News · Models & Research

Key Points

  • The paper treats constituent headedness as an explicit representational layer and learns it as a supervised prediction task over aligned constituency and dependency annotations.
  • It induces supervision by defining each constituent's head as the head of the corresponding dependency span, avoiding traditional percolation rules.
  • On aligned English and Chinese data, the models achieve near-ceiling intrinsic accuracy and substantially outperform Collins-style rule-based percolation.
  • Predicted heads yield parsing accuracy comparable to rule-derived heads under head-driven binarization, and they enable more faithful constituency-to-dependency conversion, with cross-resource and cross-language transfer via simple label-mapping interfaces.
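The head-induction rule in the second bullet can be sketched concretely: the dependency span head is the one token in a constituent's span whose dependency parent lies outside that span (or is the sentence root). A minimal illustration, assuming 0-based token indices and a parent-index array with -1 marking the root (function and variable names are ours, not the paper's):

```python
def span_head(start, end, dep_heads):
    """Return the index of the dependency head of the span [start, end).

    The span head is the token whose dependency parent lies outside the
    span (or is the root); for a constituent that forms a single
    dependency subtree, exactly one such token exists.
    """
    heads = [
        i for i in range(start, end)
        if dep_heads[i] < start or dep_heads[i] >= end
    ]
    assert len(heads) == 1, "span is not a single dependency subtree"
    return heads[0]

# Toy sentence: "the cat sleeps"; dep_heads[i] = parent index, -1 = root.
dep_heads = [1, 2, -1]  # "the" -> "cat", "cat" -> "sleeps", "sleeps" = root
print(span_head(0, 2, dep_heads))  # NP "the cat": head is "cat" -> 1
print(span_head(0, 3, dep_heads))  # S: head is "sleeps" -> 2
```

Because this rule reads the head directly off the aligned dependency tree, it gives every constituent a head without consulting a hand-written percolation table.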

Abstract

Headedness is widely used as an organizing device in syntactic analysis, yet constituency treebanks rarely encode it explicitly and most processing pipelines recover it procedurally via percolation rules. We treat this notion of constituent headedness as an explicit representational layer and learn it as a supervised prediction task over aligned constituency and dependency annotations, inducing supervision by defining each constituent head as the dependency span head. On aligned English and Chinese data, the resulting models achieve near-ceiling intrinsic accuracy and substantially outperform Collins-style rule-based percolation. Predicted heads yield comparable parsing accuracy under head-driven binarization, consistent with the induced binary training targets being largely equivalent across head choices, while increasing the fidelity of deterministic constituency-to-dependency conversion and transferring across resources and languages under simple label-mapping interfaces.
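For contrast, the Collins-style baseline the abstract mentions recovers heads procedurally: each nonterminal consults a percolation table that scans its children in a fixed direction for the first matching category. A caricature of that mechanism (the table entries and names below are simplified placeholders, not Collins' actual rules):

```python
# Illustrative head-percolation table: parent label -> (scan direction,
# category priority list). Entries are simplified placeholders only.
HEAD_RULES = {
    "NP": ("right-to-left", ["NN", "NNS", "NP", "JJ"]),
    "VP": ("left-to-right", ["VBD", "VBZ", "VB", "VP"]),
    "S":  ("left-to-right", ["VP", "S"]),
}

def percolate_head(parent, children):
    """Pick the head child index by scanning `children` (category labels)
    in the table's direction, trying each priority category in turn."""
    direction, priorities = HEAD_RULES.get(parent, ("left-to-right", []))
    indices = list(range(len(children)))
    if direction == "right-to-left":
        indices.reverse()
    for cat in priorities:
        for i in indices:
            if children[i] == cat:
                return i
    return indices[0]  # fallback: first child in the scan direction

print(percolate_head("S", ["NP", "VP"]))   # -> 1 (the VP heads the clause)
print(percolate_head("NP", ["DT", "NN"]))  # -> 1 (rightmost noun)
```

The paper's point is that such tables are brittle across treebanks and languages, whereas a learned predictor trained on dependency-induced heads sidesteps the rule engineering entirely.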