Arabic Morphosyntactic Tagging and Dependency Parsing with Large Language Models
arXiv cs.CL / 3/18/2026
Key Points
- The authors evaluate instruction-tuned large language models on morphosyntactic tagging and labeled dependency parsing for Standard Arabic to probe how well LLMs can produce explicit linguistic structure.
- They compare zero-shot prompting against retrieval-based in-context learning (ICL) with demonstrations drawn from Arabic treebanks, finding that prompt design and demonstration choice strongly influence results (a prompt-construction sketch follows this list).
- Proprietary models approach supervised baselines on feature-level tagging and, under the right prompting and ICL setups, become competitive with specialized dependency parsers (conventionally scored with attachment metrics; see the second sketch below).
- In raw-text settings, tokenization remains challenging, but retrieval-based ICL improves both parsing and tokenization performance.
- The work maps out which aspects of Arabic morphosyntax LLMs capture reliably and which remain difficult, pointing to directions for future work.
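
To make the retrieval-based ICL setup concrete, here is a minimal sketch of how demonstrations might be pulled from a treebank and assembled into a parsing prompt. The lexical-overlap retriever, the prompt wording, and the `TreebankExample` container are illustrative assumptions; the paper's exact retriever and template are not specified in this summary.

```python
# Hypothetical sketch of retrieval-based ICL for Arabic dependency parsing.
# Assumes a pool of (sentence, CoNLL-U annotation) pairs from an Arabic
# treebank; the retriever, template, and field names are illustrative.
from dataclasses import dataclass

@dataclass
class TreebankExample:
    sentence: str   # raw Arabic sentence
    conllu: str     # gold CoNLL-U annotation (tokens, heads, deprels)

def similarity(a: str, b: str) -> float:
    """Toy lexical-overlap similarity; a real setup would likely
    use sentence embeddings instead."""
    ta, tb = set(a.split()), set(b.split())
    return len(ta & tb) / max(1, len(ta | tb))

def build_icl_prompt(target: str, pool: list[TreebankExample], k: int = 4) -> str:
    """Retrieve the k treebank sentences most similar to the target and
    format a few-shot prompt asking for CoNLL-U-style dependency output."""
    demos = sorted(pool, key=lambda ex: similarity(target, ex.sentence),
                   reverse=True)[:k]
    parts = ["Parse each Arabic sentence into CoNLL-U "
             "(ID, FORM, UPOS, HEAD, DEPREL)."]
    for ex in demos:
        parts.append(f"Sentence: {ex.sentence}\nParse:\n{ex.conllu}")
    parts.append(f"Sentence: {target}\nParse:")
    return "\n\n".join(parts)
```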
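
The comparisons against supervised parsers in the key points are conventionally scored with unlabeled and labeled attachment scores (UAS/LAS). The sketch below computes both from already-aligned gold and predicted (head, label) pairs; the token-alignment step that raw-text evaluation needs when tokenization differs is assumed away here.

```python
# Attachment-score sketch: UAS counts tokens with the correct head,
# LAS additionally requires the correct dependency label. Parses are
# lists of (head, deprel) pairs, one per token, aligned with the gold
# tokenization (an assumption; raw-text setups must align tokens first).

def attachment_scores(gold: list[tuple[int, str]],
                      pred: list[tuple[int, str]]) -> tuple[float, float]:
    """Return (UAS, LAS) over two aligned parses."""
    assert len(gold) == len(pred), "parses must cover the same tokens"
    uas = sum(g[0] == p[0] for g, p in zip(gold, pred)) / len(gold)
    las = sum(g == p for g, p in zip(gold, pred)) / len(gold)
    return uas, las

# Example: token 3 gets the right head (token 2) but the wrong label.
gold = [(2, "nsubj"), (0, "root"), (2, "obj")]
pred = [(2, "nsubj"), (0, "root"), (2, "obl")]
print(attachment_scores(gold, pred))  # (1.0, 0.666...)
```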