Arabic Morphosyntactic Tagging and Dependency Parsing with Large Language Models
arXiv cs.CL · March 18, 2026
Key Points
- The authors evaluate instruction-tuned large language models on morphosyntactic tagging and labeled dependency parsing for Standard Arabic to probe how well LLMs can produce explicit linguistic structure.
- They compare zero-shot prompting against retrieval-based in-context learning (ICL) with demonstrations drawn from Arabic treebanks, finding that prompt design and demonstration choice strongly influence results (a sketch of this retrieval setup follows the list).
- Proprietary models approach supervised baselines for feature-level tagging and become competitive with specialized dependency parsers under the right prompting and ICL setups.
- In raw-text settings, tokenization remains challenging, but retrieval-based ICL improves both parsing and tokenization performance.
- The work highlights which aspects of Arabic morphosyntax and syntax LLMs capture reliably and which remain difficult, pointing to directions for future research.
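
To make the retrieval-based ICL setup concrete, here is a minimal sketch of how demonstrations could be selected and formatted into a prompt. The treebank format, the TF-IDF character n-gram retriever, the prompt wording, and the function names are illustrative assumptions, not the paper's actual implementation:

```python
# Illustrative sketch only: the treebank format, retriever, and prompt
# wording are assumptions, not the paper's actual setup.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def build_icl_prompt(target: str, treebank: list[tuple[str, str]], k: int = 3) -> str:
    """Retrieve the k treebank sentences most similar to `target` and
    format them as demonstrations ahead of the target sentence."""
    sentences = [sent for sent, _ in treebank]
    # Character n-grams give a usable similarity signal for Arabic,
    # where rich morphology makes word-level overlap sparse.
    vectorizer = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))
    matrix = vectorizer.fit_transform(sentences + [target])
    scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()

    demos = "\n\n".join(
        f"Sentence: {sentences[i]}\nCoNLL-U:\n{treebank[i][1]}"
        for i in scores.argsort()[::-1][:k]
    )
    return (
        "Segment the final sentence and produce a CoNLL-U dependency parse "
        "(ID, FORM, UPOS, FEATS, HEAD, DEPREL) for it, following the "
        "demonstrations.\n\n"
        f"{demos}\n\nSentence: {target}\nCoNLL-U:"
    )
```

The resulting prompt would be sent to whatever LLM is under evaluation. Competitiveness with supervised parsers is conventionally measured with labeled attachment score (LAS), the fraction of tokens whose predicted head and dependency relation both match gold; the caveat in the code comment is exactly why raw-text tokenization matters for the comparison:

```python
def labeled_attachment_score(gold: list[tuple[int, str]], pred: list[tuple[int, str]]) -> float:
    """LAS over aligned (head, deprel) pairs; assumes the predicted
    tokenization matches the gold tokenization."""
    return sum(g == p for g, p in zip(gold, pred)) / len(gold)
```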