Pretraining and Benchmarking Modern Encoders for Latvian
arXiv cs.CL · March 17, 2026
📰 News · Models & Research
Key Points
- The authors pretrain a suite of Latvian-specific encoders based on RoBERTa, DeBERTaV3, and ModernBERT, including long-context variants, to address data scarcity for Latvian.
- They evaluate these models on a diverse set of Latvian diagnostic and linguistic benchmarks and report competitive performance against existing monolingual and multilingual encoders.
- Their best model, lv-deberta-base (111M parameters), achieves the strongest overall performance, outperforming larger multilingual baselines as well as prior Latvian encoders.
- All pretrained models and evaluation resources are released to support further research and practical applications in Latvian NLP (see the loading sketch after this list).
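
Since the encoders are released as standard pretrained checkpoints, they could plausibly be loaded with Hugging Face Transformers. Below is a minimal sketch under that assumption; the hub identifier `your-org/lv-deberta-base` is a placeholder (the paper's actual release location is not stated here), and the example sentence is purely illustrative.

```python
# Minimal sketch: loading a released Latvian encoder with Hugging Face
# Transformers. The hub ID below is a placeholder assumption; substitute
# the actual identifier from the authors' release page.
from transformers import AutoTokenizer, AutoModel

model_id = "your-org/lv-deberta-base"  # hypothetical hub ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

# Encode a short Latvian sentence and extract contextual representations.
text = "Rīga ir Latvijas galvaspilsēta."  # "Riga is the capital of Latvia."
inputs = tokenizer(text, return_tensors="pt")
outputs = model(**inputs)

# last_hidden_state has shape (batch, seq_len, hidden_size); these
# token-level embeddings can feed downstream Latvian NLP tasks such as
# classification or tagging.
print(outputs.last_hidden_state.shape)
```

The same pattern would apply to the RoBERTa- and ModernBERT-based variants, with long-context models accepting correspondingly longer inputs.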