myMNIST: Benchmark of PETNN, KAN, and Classical Deep Learning Models for Burmese Handwritten Digit Recognition
arXiv cs.CL · March 20, 2026
Key Points
- The paper presents the first systematic benchmark on myMNIST (BHDD) evaluating 11 architectures across classical DL models, FastKAN, EfficientKAN, an energy-based model (JEM), and PETNN variants, to establish baselines for Burmese handwritten digit recognition.
- The CNN baseline achieves the best overall performance with F1 = 0.9959 and Accuracy = 0.9970, setting a strong reference for this dataset.
- The PETNN (GELU) variant closely follows with F1 = 0.9955 and Accuracy = 0.9966, outperforming the LSTM, GRU, Transformer, and KAN variants in this benchmark.
- JEM, representing energy-based modeling, is competitive with F1 = 0.9944 and Accuracy = 0.9958, demonstrating the viability of energy-inspired approaches on regional scripts.
- The study provides reproducible baselines, highlights PETNN’s strong performance relative to classical and Transformer-based models, and releases the benchmark to foster future research on Myanmar script recognition.
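The rankings above hinge on two metrics, Accuracy and (macro-averaged) F1. As a minimal sketch of how such scores are computed (the paper does not specify its averaging scheme, so macro-averaging over the ten digit classes is an assumption, and the labels below are illustrative, not the benchmark's outputs):

```python
# Hedged sketch: accuracy and macro-F1 over class labels.
# Class IDs 0..N-1 stand in for the ten Burmese digits; the toy
# predictions here are illustrative, not results from the paper.

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true label."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def macro_f1(y_true, y_pred, num_classes):
    """Unweighted mean of per-class F1 scores."""
    f1s = []
    for c in range(num_classes):
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / num_classes

# Toy example with three classes.
y_true = [0, 1, 2, 2, 1, 0]
y_pred = [0, 1, 2, 1, 1, 0]
print(round(accuracy(y_true, y_pred), 4))          # 0.8333
print(round(macro_f1(y_true, y_pred, 3), 4))       # 0.8222
```

At the reported scale (F1 = 0.9959 vs. 0.9955), gaps of this size correspond to only a handful of test examples, which is why the paper frames the CNN and PETNN results as close rather than decisively ranked.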