Top-down string-to-dependency Neural Machine Translation
arXiv cs.CL / 3/31/2026
Key Points
- The paper addresses a weakness of attention-based encoder-decoder neural machine translation (NMT) models: translation quality degrades on long inputs that are rare or absent in the training data.
- It proposes a syntactic decoder that generates the target-language dependency tree in a top-down, left-to-right order instead of using standard sequence-to-sequence decoding (see the sketch after this list).
- Experimental results indicate that the top-down string-to-dependency approach generalizes better to long inputs that fall outside the training distribution.
- The core idea is to make target-side syntax explicit in the decoder, mitigating the gap between the lengths and coverage seen in training and the inputs encountered at test time.
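To make the generation order concrete, below is a minimal sketch of top-down, left-to-right dependency-tree decoding. It assumes a stubbed `predict` callable in place of the paper's attentional neural decoder, and the names (`generate_top_down`, `Node`, `Predictor`) are illustrative rather than taken from the paper.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Optional


@dataclass
class Node:
    """A node of the target-side dependency tree: a head word plus its dependents."""
    word: str
    children: List["Node"] = field(default_factory=list)


# Hypothetical predictor signature: given the generation history so far and the
# current head word (None for the root), return the next dependent word, or
# None to stop expanding that head. In the paper this role is played by the
# neural decoder attending over the encoder states; here it is a stub so the
# traversal logic is runnable on its own.
Predictor = Callable[[List[str], Optional[str]], Optional[str]]


def generate_top_down(predict: Predictor, max_nodes: int = 50) -> Optional[Node]:
    """Generate a dependency tree top-down and left-to-right (pre-order):
    the head is emitted first, then each dependent's full subtree is produced
    before its right sibling is started."""
    history: List[str] = []   # linearized generation history, in emission order
    budget = [max_nodes]      # mutable cap on total tree size

    def expand(head: Optional[str]) -> Optional[Node]:
        if budget[0] <= 0:
            return None
        word = predict(history, head)
        if word is None:      # decoder chose to emit no (further) dependent here
            return None
        budget[0] -= 1
        history.append(word)
        node = Node(word)
        while True:           # dependents are generated left-to-right
            child = expand(word)
            if child is None:
                break
            node.children.append(child)
        return node

    return expand(None)       # the root is generated first


if __name__ == "__main__":
    # Toy scripted "decoder" producing the tree  likes -> (she, music -> (classical)).
    script = iter(["likes", "she", None, "music", "classical", None, None, None])
    tree = generate_top_down(lambda history, head: next(script))
    print(tree)
```

In the toy run the words are emitted in the order likes, she, music, classical: each head precedes its dependents, and siblings are produced left to right, which is the generation order the paper relies on. This sketch decodes greedily; a real system would score candidates with the neural decoder and typically use beam search.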