An Empirical Study of Many-Shot In-Context Learning for Machine Translation of Low-Resource Languages
arXiv cs.CL / 4/6/2026
📰 News · Ideas & Deep Analysis · Models & Research
Key Points
- The paper presents an empirical evaluation of many-shot in-context learning (ICL) for machine translation from English into ten truly low-resource languages newly added to FLORES+.
- It finds that translation quality generally improves as the number of ICL examples increases, highlighting the benefit of longer-context prompting for low-resource settings.
- The study shows that BM25-based retrieval of more informative examples substantially improves data efficiency, with 50 retrieved examples performing similarly to about 250 many-shot examples.
- Using 250 retrieved examples yields results comparable to using roughly 1,000 many-shot examples, suggesting retrieval can reduce inference cost while maintaining effectiveness.
- The authors also analyze how factors such as example retrieval quality, out-of-domain data, and ordering examples by length affect many-shot ICL performance.
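The BM25-based example retrieval mentioned above can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the whitespace tokenization, the `(source, target)` pool format, and the `retrieve_examples` helper are all assumptions made for the sake of a self-contained example.

```python
import math
from collections import Counter

def bm25_scores(query_tokens, corpus_tokens, k1=1.5, b=0.75):
    """Score each tokenized document against the query with Okapi BM25."""
    n = len(corpus_tokens)
    avgdl = sum(len(d) for d in corpus_tokens) / n
    df = Counter()  # document frequency per term
    for doc in corpus_tokens:
        df.update(set(doc))
    scores = []
    for doc in corpus_tokens:
        tf = Counter(doc)
        score = 0.0
        for term in query_tokens:
            if term not in tf:
                continue
            idf = math.log(1 + (n - df[term] + 0.5) / (df[term] + 0.5))
            norm = k1 * (1 - b + b * len(doc) / avgdl)
            score += idf * tf[term] * (k1 + 1) / (tf[term] + norm)
        scores.append(score)
    return scores

def retrieve_examples(source_sentence, pool, k=50):
    """Return the top-k (source, target) pairs whose source side best
    matches the input sentence under BM25 (hypothetical pool format)."""
    corpus = [src.lower().split() for src, _ in pool]
    scores = bm25_scores(source_sentence.lower().split(), corpus)
    ranked = sorted(range(len(pool)), key=lambda i: scores[i], reverse=True)
    return [pool[i] for i in ranked[:k]]

# Toy parallel pool (hypothetical data); targets are placeholders.
pool = [
    ("The cat sat on the mat.", "tgt-1"),
    ("Rain fell all night.", "tgt-2"),
    ("The cat chased the dog.", "tgt-3"),
]
top = retrieve_examples("the cat is here", pool, k=2)
```

The retrieved pairs would then be formatted as few-shot translation examples in the prompt; the study's finding is that 50 such retrieved examples can match roughly 250 examples drawn without retrieval.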