Learning Constituent Headedness
arXiv cs.CL / 3/17/2026
📰 News · Models & Research
Key Points
- The paper treats constituent headedness as an explicit representational layer and learns it as a supervised prediction task over aligned constituency and dependency annotations.
- It induces supervision by defining each constituent's head as the head of the corresponding span in an aligned dependency tree, avoiding hand-written head-percolation rules (both schemes are sketched after this list).
- On aligned English and Chinese data, the models achieve near-ceiling intrinsic accuracy and substantially outperform Collins-style rule-based percolation.
- Predicted heads yield parsing accuracy comparable to rule-based heads under head-driven binarization, and they enable more faithful constituency-to-dependency conversion (see the last sketch below), with cross-resource and cross-language transfer via simple label-mapping interfaces.
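
A rough sketch of the supervision scheme in the second point: the head of a constituent span can be read directly off an aligned dependency tree, as the one token whose dependency head falls outside the span. This is a minimal illustration, assuming 0-based head indices with -1 for the root; the function name and encoding are not from the paper.

```python
def span_head(heads: list[int], i: int, j: int) -> int | None:
    """Return the head token of span [i, j), where heads[k] is the
    0-based index of token k's dependency head (-1 for the root).
    The span head is the unique in-span token whose head lies
    outside the span."""
    candidates = [k for k in range(i, j)
                  if heads[k] < i or heads[k] >= j]
    # A span matching a dependency subtree has exactly one such
    # token; anything else means the span is not a subtree.
    return candidates[0] if len(candidates) == 1 else None

# Toy sentence "the cat sat": "the" -> "cat", "cat" -> "sat", "sat" -> root.
heads = [1, 2, -1]
print(span_head(heads, 0, 2))  # 1: "cat" heads the NP "the cat"
print(span_head(heads, 0, 3))  # 2: "sat" heads the whole clause
```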
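
For contrast, the Collins-style baseline the paper outperforms percolates heads through hand-written rules keyed on the parent label. The rule table below is a tiny illustrative fragment, not the full Collins (1999) table.

```python
# Each rule: parent label -> (scan direction, child labels by priority).
HEAD_RULES = {
    "NP": ("right-to-left", ["NN", "NNS", "NNP", "NP"]),
    "VP": ("left-to-right", ["VBD", "VBZ", "VB", "VP"]),
    "S":  ("left-to-right", ["VP", "S"]),
}

def percolate_head(parent: str, children: list[str]) -> int:
    """Pick the index of the head child of `parent` by scanning
    `children` for the highest-priority label in the rule table."""
    direction, priorities = HEAD_RULES.get(parent, ("left-to-right", []))
    order = (range(len(children)) if direction == "left-to-right"
             else range(len(children) - 1, -1, -1))
    for label in priorities:
        for idx in order:
            if children[idx] == label:
                return idx
    # Fall back to the first child in scan order when no rule matches.
    return 0 if direction == "left-to-right" else len(children) - 1

print(percolate_head("S", ["NP", "VP"]))   # 1: the VP heads the clause
print(percolate_head("NP", ["DT", "NN"]))  # 1: the NN heads the NP
```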
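
Once every constituent carries a head, the constituency-to-dependency conversion in the last point reduces to a single post-order pass: each non-head child's lexical head attaches to the head child's lexical head. A minimal sketch; the `Node` encoding and `head_of` mapping are illustrative assumptions, not the paper's interface.

```python
class Node:
    """A constituency-tree node; leaves carry a token index."""
    def __init__(self, label, children=None, index=None):
        self.label = label
        self.children = children or []
        self.index = index

def to_dependencies(node, head_of, deps):
    """Return the lexical head index of `node`, appending
    (dependent, head) arcs to `deps`. head_of[id(node)] gives the
    position of the head child among node.children."""
    if not node.children:            # leaf: heads itself
        return node.index
    child_heads = [to_dependencies(c, head_of, deps)
                   for c in node.children]
    h = child_heads[head_of[id(node)]]
    for ch in child_heads:
        if ch != h:
            deps.append((ch, h))     # attach non-head child to head
    return h

# "(S (NP the cat) (VP sat))" with NP headed by "cat", S by the VP.
the, cat, sat = Node("the", index=0), Node("cat", index=1), Node("sat", index=2)
np, vp = Node("NP", [the, cat]), Node("VP", [sat])
s = Node("S", [np, vp])
deps = []
root = to_dependencies(s, {id(np): 1, id(vp): 0, id(s): 1}, deps)
print(root, deps)  # 2 [(0, 1), (1, 2)]
```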
Related Articles

- Report: Observations of "Self-Referential Recursion" and "Stateful Emulation" in LLMs (note)
- Dialogue with Master Zhuge Liang (Kongming), a ChatGPT roleplay, Part 45: "Galactic Civilization and the Dark Matter Engine" (note)
- GPT-5.4 mini/nano arrive! Small, high-performance models, twice as fast and available on the free plan (note)
- Why a Perfect-Memory AI Agent Without Persona Drift is Architecturally Impossible (Dev.to)
- Learning to Reason with Curriculum I: Provable Benefits of Autocurriculum (arXiv cs.LG)