GATech at AbjadMed: Bidirectional Encoders vs. Causal Decoders: Insights from 82-Class Arabic Medical Classification

arXiv cs.AI / 3/12/2026

Key Points

  • The paper outlines a system for Arabic medical text classification across 82 categories, driven by a fine-tuned AraBERTv2 encoder with hybrid attention/mean pooling and multi-sample dropout for robust regularization.
  • It benchmarks this bidirectional encoder setup against multilingual and Arabic-specific encoders and against large-scale causal decoders, including Llama 3.3 70B zero-shot re-ranking and Qwen 3B hidden-state features.
  • The results indicate that specialized bidirectional encoders outperform causal decoders for fine-grained classification by better capturing global semantic context.
  • It notes that causal decoders, optimized for next-token prediction, produce sequence-biased embeddings that are less effective for categorization, especially given data imbalance and label noise.
  • Final results on the test set report metrics such as Accuracy and Macro-F1, highlighting the superiority of fine-tuned encoders for specialized Arabic NLP tasks.
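The hybrid attention/mean pooling described above can be sketched as follows. This is a minimal NumPy illustration of the general technique, not the paper's implementation: the scoring vector `w_attn`, the concatenation of the two pooled vectors, and all shapes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def hybrid_pool(hidden, mask, w_attn):
    """Combine attention pooling with masked mean pooling.

    hidden: (batch, seq, dim) token states from the encoder
    mask:   (batch, seq), 1.0 for real tokens, 0.0 for padding
    w_attn: (dim,) learned attention scoring vector (hypothetical)
    """
    scores = hidden @ w_attn                        # (batch, seq)
    scores = np.where(mask == 1, scores, -1e9)      # exclude padding
    alpha = softmax(scores, axis=1)                 # attention weights
    attn_vec = (alpha[..., None] * hidden).sum(axis=1)
    mean_vec = (hidden * mask[..., None]).sum(axis=1) / mask.sum(axis=1, keepdims=True)
    # Concatenate the two views into one sentence representation
    return np.concatenate([attn_vec, mean_vec], axis=-1)  # (batch, 2*dim)

batch, seq, dim = 2, 5, 8
hidden = rng.normal(size=(batch, seq, dim))
mask = np.array([[1, 1, 1, 0, 0], [1, 1, 1, 1, 1]], dtype=float)
w_attn = rng.normal(size=dim)
pooled = hybrid_pool(hidden, mask, w_attn)
print(pooled.shape)  # (2, 16)
```

The attention branch lets the model weight diagnostically important tokens, while the mean branch keeps a stable global summary; concatenating the two is one common way to combine them before the classification head.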

Abstract

This paper presents a system description for Arabic medical text classification across 82 distinct categories. Our primary architecture utilizes a fine-tuned AraBERTv2 encoder enhanced with a hybrid pooling strategy, combining attention and mean representations, and multi-sample dropout for robust regularization. We systematically benchmark this approach against a suite of multilingual and Arabic-specific encoders, as well as several large-scale causal decoders, including zero-shot re-ranking via Llama 3.3 70B and feature extraction from Qwen 3B hidden states. Our findings demonstrate that specialized bidirectional encoders significantly outperform causal decoders in capturing the precise semantic boundaries required for fine-grained medical text classification. We show that causal decoders, optimized for next-token prediction, produce sequence-biased embeddings that are less effective for categorization compared to the global context captured by bidirectional attention. Despite significant class imbalance and label noise identified within the training data, our results highlight the superior semantic compression of fine-tuned encoders for specialized Arabic NLP tasks. Final performance metrics on the test set, including Accuracy and Macro-F1, are reported and discussed.
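Multi-sample dropout, the regularizer named in the abstract, averages the classifier output over several independent dropout masks of the same pooled vector. A minimal sketch of the general idea, assuming a linear classification head; the dropout rate, sample count, and weight shapes are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def multi_sample_dropout_logits(pooled, W, b, p=0.3, n_samples=5, rng=rng):
    """Average logits over n_samples independent dropout masks.

    pooled: (batch, dim) pooled sentence vectors
    W, b:   classifier weights, (dim, n_classes) and (n_classes,)
    Shown is the training-time averaging that acts as a regularizer;
    at inference dropout would be disabled.
    """
    logits = 0.0
    for _ in range(n_samples):
        # Inverted dropout: zero units with prob p, rescale survivors
        keep = (rng.random(pooled.shape) >= p) / (1.0 - p)
        logits = logits + (pooled * keep) @ W + b
    return logits / n_samples

batch, dim, n_classes = 4, 16, 82   # 82 matches the task's label count
pooled = rng.normal(size=(batch, dim))
W = rng.normal(size=(dim, n_classes)) * 0.1
b = np.zeros(n_classes)
logits = multi_sample_dropout_logits(pooled, W, b)
print(logits.shape)  # (4, 82)
```

Averaging over several masks smooths the gradient the head receives each step, which is useful here given the class imbalance and label noise the authors report in the training data.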