SiDiaC-v.2.0: Sinhala Diachronic Corpus Version 2.0
arXiv cs.CL / 3/12/2026
Key Points
- SiDiaC-v.2.0 is the largest Sinhala diachronic corpus to date, covering publication dates from 1800–1955 and written dates from the 5th to the 20th century.
- It contains 244k words across 185 literary works, curated with thorough filtering, preprocessing, and copyright-compliance checks; a subset of 59 documents totalling 70k words is annotated with written dates.
- Texts were digitised with Google Document AI OCR and post-processed to fix formatting, handle code-mixing, insert special tokens, and repair malformed tokens; syntactic annotation and text-normalisation strategies were informed by FarPaHC, SiDiaC-v.1.0, and CCOHA.
- The corpus uses a two-layer genre categorisation (primary: Non-Fiction vs Fiction; secondary: Religious, History, Poetry, Language, and Medical) to support Sinhala NLP, building on prior work despite limited resources.
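To make the two-layer genre scheme and the OCR post-processing step concrete, here is a minimal Python sketch. The record fields, label sets, and cleanup rules are illustrative assumptions based on the summary above, not the authors' actual pipeline; note it deliberately keeps zero-width joiners, which Sinhala script needs for conjunct forms.

```python
import re
from dataclasses import dataclass

# Label sets as described in the paper summary.
PRIMARY = {"Non-Fiction", "Fiction"}
SECONDARY = {"Religious", "History", "Poetry", "Language", "Medical"}

@dataclass
class CorpusDocument:
    """Hypothetical per-document record with a two-layer genre label."""
    title: str
    publication_year: int    # 1800-1955 for SiDiaC-v.2.0
    written_century: int     # 5th-20th century for the annotated subset
    primary_genre: str
    secondary_genre: str

    def __post_init__(self):
        # Enforce the two-layer categorisation.
        assert self.primary_genre in PRIMARY, self.primary_genre
        assert self.secondary_genre in SECONDARY, self.secondary_genre

def clean_ocr_text(text: str) -> str:
    """Toy post-processing pass: drop Unicode replacement characters
    that OCR emits for unrecognised glyphs, and collapse whitespace
    runs. Zero-width joiners are left intact because Sinhala uses
    them for conjuncts (e.g. in the cluster sri)."""
    text = text.replace("\ufffd", "")          # OCR failure marker
    text = re.sub(r"[ \t]+", " ", text)        # collapse spaces/tabs
    return text.strip()

doc = CorpusDocument("Example chronicle", 1850, 12, "Non-Fiction", "History")
print(clean_ocr_text("abc\ufffd  def"))  # -> "abc def"
```

A real pipeline would add language-ID for code-mixed spans and token-level repair; this sketch only shows the shape of the metadata and the simplest cleanup.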
Related Articles

Math needs thinking time, everyday knowledge needs memory, and a new Transformer architecture aims to deliver both
THE DECODER
Kreuzberg v4.5.0: We loved Docling's model so much that we gave it a faster engine
Reddit r/LocalLLaMA
Today, what hardware to get for running large-ish local models like qwen 120b?
Reddit r/LocalLLaMA
Running mistral locally for meeting notes and it's honestly good enough for my use case
Reddit r/LocalLLaMA
[D] Single-artist longitudinal fine art dataset spanning 5 decades now on Hugging Face — potential applications in style evolution, figure representation, and ethical training data
Reddit r/MachineLearning