myMNIST: Benchmark of PETNN, KAN, and Classical Deep Learning Models for Burmese Handwritten Digit Recognition
arXiv cs.CL / 3/20/2026
💬 Opinion · Ideas & Deep Analysis · Models & Research
Key Points
- The paper presents the first systematic benchmark on myMNIST (BHDD) evaluating 11 architectures across classical DL models, FastKAN, EfficientKAN, an energy-based model (JEM), and PETNN variants, to establish baselines for Burmese handwritten digit recognition.
- The CNN baseline achieves the best overall performance with F1 = 0.9959 and Accuracy = 0.9970, setting a strong reference for this dataset.
- The PETNN variant with GELU activation follows closely with F1 = 0.9955 and Accuracy = 0.9966, outperforming the LSTM, GRU, Transformer, and KAN variants in this benchmark.
- JEM, representing energy-based modeling, is competitive with F1 = 0.9944 and Accuracy = 0.9958, demonstrating viability of energy-inspired approaches on regional scripts.
- The study provides reproducible baselines, highlights PETNN’s strong performance relative to classical and Transformer-based models, and releases the benchmark to foster future research on Myanmar script recognition.
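The rankings above hinge on two metrics, accuracy and F1. The summary does not say which F1 averaging the paper uses; assuming macro-averaging (the unweighted mean of per-class F1, a common choice for balanced digit datasets), a minimal sketch of both metrics in plain Python looks like this:

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that exactly match the true labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def macro_f1(y_true, y_pred):
    """Unweighted mean of per-class F1 scores (assumed averaging scheme)."""
    labels = sorted(set(y_true) | set(y_pred))
    f1_scores = []
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1_scores.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1_scores) / len(f1_scores)
```

For example, with true labels `[0, 0, 1, 1, 2]` and predictions `[0, 0, 1, 2, 2]`, accuracy is 0.8 while macro-F1 is about 0.778, since the errors in classes 1 and 2 drag down their per-class scores equally.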