Arabic Morphosyntactic Tagging and Dependency Parsing with Large Language Models

arXiv cs.CL / 3/18/2026

Key Points

  • The authors evaluate instruction-tuned large language models on morphosyntactic tagging and labeled dependency parsing for Standard Arabic to probe how well LLMs can produce explicit linguistic structure.
  • They compare zero-shot prompting against retrieval-based in-context learning (ICL) using Arabic treebanks, finding that prompt design and demonstration choice strongly influence results.
  • Proprietary models approach supervised baselines for feature-level tagging and become competitive with specialized dependency parsers under the right prompting and ICL setups.
  • In raw-text settings, tokenization remains challenging, but retrieval-based ICL improves both parsing and tokenization performance.
  • The work highlights which aspects of Arabic morphosyntax and syntax LLMs capture reliably and which remain difficult, guiding future research directions.
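The retrieval-based ICL setup mentioned above can be illustrated with a minimal sketch. The retrieval criterion (token-overlap similarity), the treebank record layout, and all function names here are illustrative assumptions, not the paper's actual implementation, which may use learned embeddings or other retrievers.

```python
# Hypothetical sketch of retrieval-based demonstration selection for ICL.
# Assumption: a "treebank" is a list of dicts with "sent" and "tags" fields;
# the paper's real retriever and prompt format are not specified here.

def jaccard(a: str, b: str) -> float:
    """Token-overlap (Jaccard) similarity between two whitespace-tokenized sentences."""
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def select_demonstrations(query: str, treebank: list, k: int = 3) -> list:
    """Pick the k treebank examples most similar to the query sentence."""
    return sorted(treebank, key=lambda ex: jaccard(query, ex["sent"]), reverse=True)[:k]

def build_prompt(query: str, demos: list) -> str:
    """Assemble a tagging prompt: instruction, retrieved examples, then the query."""
    parts = ["Tag each Arabic token with its morphosyntactic features."]
    for ex in demos:
        parts.append(f"Sentence: {ex['sent']}\nTags: {ex['tags']}")
    parts.append(f"Sentence: {query}\nTags:")
    return "\n\n".join(parts)
```

Under this scheme, zero-shot prompting corresponds to `k = 0` (instruction and query only), so the comparison in the paper reduces to how much the retrieved demonstrations shift model behavior.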

Abstract

Large language models (LLMs) perform strongly on many NLP tasks, but their ability to produce explicit linguistic structure remains unclear. We evaluate instruction-tuned LLMs on two structured prediction tasks for Standard Arabic: morphosyntactic tagging and labeled dependency parsing. Arabic provides a challenging testbed due to its rich morphology and orthographic ambiguity, which create strong morphology-syntax interactions. We compare zero-shot prompting with retrieval-based in-context learning (ICL) using examples from Arabic treebanks. Results show that prompt design and demonstration selection strongly affect performance: proprietary models approach supervised baselines for feature-level tagging and become competitive with specialized dependency parsers. In raw-text settings, tokenization remains challenging, though retrieval-based ICL improves both parsing and tokenization. Our analysis highlights which aspects of Arabic morphosyntax and syntax LLMs capture reliably and which remain difficult.
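Labeled dependency parsing is typically scored with unlabeled and labeled attachment scores (UAS/LAS): the fraction of tokens whose predicted head is correct, and whose head and relation label are both correct. A minimal sketch of that computation, assuming gold and predicted analyses are aligned token-by-token as `(head, label)` pairs (the paper's exact evaluation protocol, e.g. its handling of tokenization mismatches in raw-text settings, is not reproduced here):

```python
# Illustrative UAS/LAS computation over aligned (head_index, relation_label) pairs.
# Assumption: gold and pred cover the same tokenization; real raw-text evaluation
# must first align system tokens to gold tokens.

def attachment_scores(gold: list, pred: list) -> tuple:
    """Return (UAS, LAS) for two equal-length lists of (head, label) pairs."""
    assert len(gold) == len(pred) and gold, "analyses must be aligned and non-empty"
    uas = sum(g[0] == p[0] for g, p in zip(gold, pred)) / len(gold)
    las = sum(g == p for g, p in zip(gold, pred)) / len(gold)
    return uas, las
```

Because LAS requires both head and label to match, it lower-bounds UAS, which is why "competitive with specialized dependency parsers" is a stronger claim for labeled than for unlabeled parsing.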