A tree interpretation of arc standard dependency derivation

arXiv cs.CL / 3/31/2026


Key Points

  • The paper proves that arc-standard shift/leftarc/rightarc transition sequences for projective dependency trees correspond to a unique ordered tree representation with surface-contiguous yields and stable lexical anchoring.
  • It shows the hierarchical ordered representation uniquely determines the original dependency arcs, establishing a derivational (transition-to-structure) interpretation rather than a convertive (graph-to-phrase structure) one.
  • The approach also characterizes projectivity, stating that a single-headed dependency tree has such a contiguous ordered representation if and only if it is projective.
  • For non-projective dependency inputs, the method can be applied in practice using pseudo-projective lifting prior to derivation and inverse decoding afterward to recover the original structure.
  • A proof-of-concept implementation in a neural transition-based parser demonstrates that the mapped derivations are executable and can support stable dependency recovery.
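To make the transition system in the points above concrete, here is a minimal sketch of arc-standard parsing that replays a SHIFT/LEFTARC/RIGHTARC sequence and recovers the dependency arcs. The function name and arc encoding are illustrative, not taken from the paper's implementation.

```python
def arc_standard(words, transitions):
    """Replay an arc-standard transition sequence.

    words: list of tokens (arcs use 1-based indices);
    transitions: list of "SHIFT" / "LEFTARC" / "RIGHTARC".
    Returns the set of (head, dependent) arcs.
    """
    stack, buffer = [], list(range(1, len(words) + 1))
    arcs = set()
    for t in transitions:
        if t == "SHIFT":
            stack.append(buffer.pop(0))          # move next word onto the stack
        elif t == "LEFTARC":
            dep = stack.pop(-2)                  # top of stack heads the second item
            arcs.add((stack[-1], dep))
        elif t == "RIGHTARC":
            dep = stack.pop()                    # second item heads the top
            arcs.add((stack[-1], dep))
    return arcs

# "the cat sleeps": cat -> the, sleeps -> cat
seq = ["SHIFT", "SHIFT", "LEFTARC", "SHIFT", "LEFTARC"]
print(arc_standard(["the", "cat", "sleeps"], seq))  # {(2, 1), (3, 2)}
```

The paper's claim is that each such sequence can equally be read as building an ordered tree, with the arcs recoverable from that tree alone.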

Abstract

We show that arc-standard derivations for projective dependency trees determine a unique ordered tree representation with surface-contiguous yields and stable lexical anchoring. Each SHIFT, LEFTARC, and RIGHTARC transition corresponds to a deterministic tree update, and the resulting hierarchical object uniquely determines the original dependency arcs. We further show that this representation characterizes projectivity: a single-headed dependency tree admits such a contiguous ordered representation if and only if it is projective. The proposal is derivational rather than convertive. It interprets arc-standard transition sequences directly as ordered tree construction, rather than transforming a completed dependency graph into a phrase-structure output. For non-projective inputs, the same interpretation can be used in practice via pseudo-projective lifting before derivation and inverse decoding after recovery. A proof-of-concept implementation in a standard neural transition-based parser shows that the mapped derivations are executable and support stable dependency recovery.
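One way to realize the "transition as deterministic tree update" reading is to let the stack hold ordered trees rather than bare token indices: SHIFT pushes a leaf, and each reduce combines the two top trees into a new internal node anchored at the head word. The node layout below is my own illustration under that assumption; the paper's exact representation may differ.

```python
class Node:
    """An ordered tree node with a lexical anchor (token index)."""
    def __init__(self, anchor, children=None):
        self.anchor = anchor
        self.children = children or []    # ordered left-to-right

    def span(self):
        """Surface yield of this subtree, in order."""
        if not self.children:
            return [self.anchor]
        return [i for c in self.children for i in c.span()]

def derive(n_tokens, transitions):
    """Interpret SHIFT/LEFTARC/RIGHTARC as ordered-tree construction."""
    stack, buf = [], list(range(1, n_tokens + 1))
    for t in transitions:
        if t == "SHIFT":
            stack.append(Node(buf.pop(0)))
        else:
            right = stack.pop()
            left = stack.pop()
            head = right if t == "LEFTARC" else left
            # new internal node preserves surface order, anchored at the head
            stack.append(Node(head.anchor, [left, right]))
    return stack[-1]

tree = derive(3, ["SHIFT", "SHIFT", "LEFTARC", "SHIFT", "LEFTARC"])
print(tree.anchor, tree.span())   # 3 [1, 2, 3]
```

Because every reduce merges two stack-adjacent subtrees, each internal node's yield is a contiguous surface span, which is exactly the property the abstract ties to projectivity; the head anchor at each node lets the original arcs be read back off the tree.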