Structured Prompting for Arabic Essay Proficiency: A Trait-Centric Evaluation Approach
arXiv cs.CL · March 23, 2026
Key Points
- The paper introduces a three-tier prompting framework (standard, hybrid, rubric-guided) for trait-specific automatic essay scoring (AES) in Arabic using LLMs under zero-shot and few-shot settings.
- It addresses the scarcity of Arabic AES tools and demonstrates that structured prompting enables trait-level evaluation across organization, vocabulary, development, and style rather than relying on model size alone.
- The hybrid approach simulates multi-agent evaluation with trait-specialist raters, while rubric-guided prompting uses scored exemplars to improve alignment; eight LLMs were evaluated on the QAES Arabic dataset.
- Rubric-guided prompting yields consistent gains across traits and models, with Development and Style showing the largest improvements; Fanar-1-9B-Instruct achieves the highest trait-level agreement (QWK 0.28, CI 0.41) in the zero- and few-shot settings.
- This work establishes the first comprehensive framework for proficiency-oriented Arabic AES and lays the groundwork for scalable assessment in low-resource educational contexts.
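The rubric-guided setup described above can be sketched as a prompt builder that pairs a trait-specific rubric with scored exemplars. This is a minimal illustration assuming a 1-5 trait scale; all names (`build_rubric_prompt`, `Exemplar`, `TRAIT_RUBRICS`) and the one-line rubric wordings are hypothetical, not the authors' actual implementation.

```python
from dataclasses import dataclass

TRAITS = ("organization", "vocabulary", "development", "style")

# Hypothetical one-line rubric descriptions per trait (assumption, not from the paper).
TRAIT_RUBRICS = {
    "organization": "logical paragraph order and clear transitions",
    "vocabulary": "range and precision of word choice",
    "development": "depth of ideas and supporting detail",
    "style": "tone, register, and sentence variety",
}

@dataclass
class Exemplar:
    essay: str
    score: int  # trait score on the assumed 1-5 scale

def build_rubric_prompt(essay: str, trait: str, exemplars: list[Exemplar]) -> str:
    """Assemble a rubric-guided prompt for scoring one trait of one essay."""
    if trait not in TRAITS:
        raise ValueError(f"unknown trait: {trait}")
    lines = [
        f"You are an expert rater of Arabic essays, specializing in {trait}.",
        f"Rubric: score {trait} ({TRAIT_RUBRICS[trait]}) on a 1-5 scale.",
    ]
    # Scored exemplars are what distinguishes the rubric-guided tier;
    # an empty exemplar list reduces this to a zero-shot prompt.
    for i, ex in enumerate(exemplars, 1):
        lines.append(f"Example {i} (score {ex.score}): {ex.essay}")
    lines.append(f"Essay to score: {essay}")
    lines.append("Respond with a single integer score from 1 to 5.")
    return "\n".join(lines)
```

In the hybrid, multi-agent-style setting, one such prompt would be issued per trait (one "specialist rater" each) and the four scores collected separately, rather than asking a single prompt for all traits at once.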