Learning Constituent Headedness
arXiv cs.CL / 3/17/2026
📰 News · Models & Research
Key Points
- The paper treats constituent headedness as an explicit representational layer and learns it as a supervised prediction task over aligned constituency and dependency annotations.
- It induces supervision automatically by defining each constituent's head as the head of its span in the aligned dependency tree, avoiding hand-written head-percolation rules (see the first sketch after this list).
- On aligned English and Chinese data, the models achieve near-ceiling intrinsic accuracy and substantially outperform Collins-style rule-based percolation.
- Predicted heads yield comparable parsing accuracy under head-driven binarization and enable more faithful constituency-to-dependency conversion (second sketch below), with cross-resource and cross-language transfer via simple label-mapping interfaces.
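To make the induction rule concrete, here is a minimal Python sketch; it is not the paper's implementation, and the `span_head` helper and head-array encoding are assumptions for illustration. The idea: the head of a constituent is the one token in its span whose dependency parent falls outside the span.

```python
def span_head(head, start, end):
    """Return the index of the head token of the constituent
    covering tokens [start, end) of a dependency tree.

    head[i] is the parent index of token i (-1 for the root).
    The span head is the token whose parent falls outside the
    span; for a span that is a constituent of a projective
    tree, this token is unique.
    """
    candidates = [
        i for i in range(start, end)
        if head[i] == -1 or not (start <= head[i] < end)
    ]
    if len(candidates) != 1:
        raise ValueError(f"[{start}, {end}) has no unique external head")
    return candidates[0]


# "the old dog barked": the->dog, old->dog, dog->barked, barked->ROOT
heads = [2, 2, 3, -1]
print(span_head(heads, 0, 3))  # NP "the old dog" -> 2 ("dog")
print(span_head(heads, 0, 4))  # full clause      -> 3 ("barked")
```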
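Likewise, once every constituent carries a head, constituency-to-dependency conversion reduces to a single recursive pass: the lexical head of each non-head child attaches to the lexical head of the head child. A hedged sketch, with a hypothetical `Node` type (leaves carry a token index; internal nodes mark one child as the head):

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """Constituency node: a leaf holds a token index; an internal
    node holds children plus the index of its head child."""
    token: int = -1                      # leaf token index, -1 for internal
    children: list = field(default_factory=list)
    head_child: int = 0                  # index into `children`

def to_dependencies(node, arcs):
    """Return the lexical head of `node`, appending (dependent, head)
    arcs for all non-head children along the way."""
    if not node.children:                # leaf: the token heads itself
        return node.token
    heads = [to_dependencies(c, arcs) for c in node.children]
    h = heads[node.head_child]
    for i, dep in enumerate(heads):
        if i != node.head_child:
            arcs.append((dep, h))        # attach sibling head to span head
    return h

# "the dog barked": NP(the, dog) headed by "dog"; S(NP, barked) headed by "barked"
np_node = Node(children=[Node(token=0), Node(token=1)], head_child=1)
s = Node(children=[np_node, Node(token=2)], head_child=1)
arcs = []
root = to_dependencies(s, arcs)
print(root, arcs)  # 2 [(0, 1), (1, 2)]
```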
Related Articles

Interesting loop
Reddit r/LocalLLaMA
Qwen3.5-122B-A10B Uncensored (Aggressive) — GGUF Release + new K_P Quants
Reddit r/LocalLLaMA
FeatherOps: Fast fp8 matmul on RDNA3 without native fp8
Reddit r/LocalLLaMA

VerityFlow-AI: Engineering a Multi-Agent Swarm for Real-Time Truth-Validation and Deep-Context Media Synthesis
Dev.to
[R] Sinc Reconstruction for LLM Prompts: Applying Nyquist-Shannon to the Specification Axis (275 obs, 97% cost reduction, open source)
Reddit r/MachineLearning