AI Appeals Processor: A Deep Learning Approach to Automated Classification of Citizen Appeals in Government Services

arXiv cs.CL / 4/7/2026


Key Points

  • The paper argues that government agencies are bottlenecked by manual processing of citizen appeals, which averages 20 minutes per appeal and only achieves 67% classification accuracy.
  • It introduces “AI Appeals Processor,” a microservice that uses natural language processing and deep learning to automatically classify and route appeals submitted electronically.
  • The study benchmarks several text classification pipelines (Bag-of-Words+SVM, TF-IDF+SVM, fastText, Word2Vec+LSTM, and BERT) on 10,000 real appeals across three categories (complaints, applications, proposals) and seven thematic domains.
  • Results show Word2Vec+LSTM reaching 78% accuracy and cutting processing time by 54%, outperforming transformer-based options on the paper’s stated accuracy/efficiency trade-off.
  • The work suggests a practical path to scaling public service handling by integrating automated NLP classification into existing government service workflows via a microservice architecture.
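To make the benchmarked pipelines concrete, here is a minimal sketch of one of the classical baselines the paper compares, TF-IDF features feeding a linear SVM, using scikit-learn. The toy appeals and labels below are hypothetical stand-ins for the paper's 10,000-appeal dataset, which is not public here.

```python
# Sketch of a TF-IDF + linear SVM baseline for appeal classification.
# Training texts and labels are illustrative, not the paper's data.
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

# Hypothetical examples covering the three primary categories.
appeals = [
    "The streetlight on my block has been broken for weeks",
    "Garbage collection was missed again on our street",
    "I am applying for a residential parking permit",
    "Requesting a copy of my property tax records",
    "The city should add a bike lane on Main Street",
    "I suggest extending library hours on weekends",
]
labels = ["complaint", "complaint", "application",
          "application", "proposal", "proposal"]

clf = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),  # unigram + bigram features
    ("svm", LinearSVC()),                            # linear-kernel SVM
])
clf.fit(appeals, labels)

print(clf.predict(["The sidewalk near the park is cracked and unsafe"]))
```

Swapping the vectorizer or classifier in the `Pipeline` is how such a benchmark typically compares representations (Bag-of-Words vs. TF-IDF) under the same evaluation harness.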

Abstract

Government agencies worldwide face growing volumes of citizen appeals, with electronic submissions increasing sharply in recent years. Traditional manual processing averages 20 minutes per appeal with only 67% classification accuracy, creating significant bottlenecks in public service delivery. This paper presents AI Appeals Processor, a microservice-based system that integrates natural language processing and deep learning techniques for automated classification and routing of citizen appeals. We evaluate multiple approaches, including Bag-of-Words with SVM, TF-IDF with SVM, fastText, Word2Vec with LSTM, and BERT, on a representative dataset of 10,000 real citizen appeals across three primary categories (complaints, applications, and proposals) and seven thematic domains. Our experiments demonstrate that a Word2Vec+LSTM architecture achieves 78% classification accuracy while reducing processing time by 54%, offering an optimal balance between accuracy and computational efficiency compared to transformer-based models.
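A quick back-of-envelope check shows what the two headline numbers imply in practice. Only the 20-minutes-per-appeal and 54%-reduction figures come from the abstract; the per-shift capacity comparison below is our own illustration, assuming a single 8-hour shift.

```python
# Derive the implied per-appeal time and per-shift throughput from the
# abstract's two claims: 20 min manual processing, 54% time reduction.
MANUAL_MINUTES = 20.0
REDUCTION = 0.54

automated_minutes = MANUAL_MINUTES * (1 - REDUCTION)  # time per appeal after automation

workday_minutes = 8 * 60  # assumed 8-hour shift (not stated in the paper)
manual_per_day = workday_minutes / MANUAL_MINUTES
automated_per_day = workday_minutes / automated_minutes

print(f"{automated_minutes:.1f} min/appeal")  # 9.2 min/appeal
print(f"{manual_per_day:.0f} vs {automated_per_day:.0f} appeals per shift")
```

Under these assumptions, throughput per operator roughly doubles, from 24 to about 52 appeals per shift, which is the scaling argument the paper's microservice integration rests on.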