In-context Learning vs. Instruction Tuning: The Case of Small and Multilingual Language Models

arXiv cs.CL / 5/1/2026


Key Points

  • The paper contrasts instruction tuning (supervised fine-tuning on curated instruction datasets, sometimes with human-preference alignment) with in-context learning (ICL) as an alternative way to teach instruction following to base LLMs.
  • It evaluates whether ICL can reliably produce instruction-following behavior for small and multilingual language models, where instruction tuning is often more resource-intensive.
  • The authors find that ICL instruction-following performance degrades in non-English settings and at smaller model sizes.
  • They show that applying Direct Preference Optimization (DPO) on base models can partially improve results, but additional approaches are still needed to match the strongest English-centric large models.
  • Overall, the work suggests ICL alone is not sufficient for robust multilingual instruction following at smaller scales, highlighting a remaining gap for future methods.
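The DPO step mentioned above optimises the policy directly on preference pairs, without a separate reward model: it pushes up the policy's log-probability margin for the preferred completion relative to a frozen reference model. A minimal sketch of the per-pair loss (the function name and scalar interface are illustrative, not the paper's implementation):

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """DPO loss for a single preference pair.

    Each argument is the summed log-probability of the chosen
    (preferred) or rejected completion under the trainable policy
    or the frozen reference model (here, the base model).
    beta scales how strongly the policy is pulled away from the
    reference; assumes the resulting margin is moderate in size.
    """
    margin = beta * ((policy_chosen_logp - ref_chosen_logp)
                     - (policy_rejected_logp - ref_rejected_logp))
    # loss = -log(sigmoid(margin)), written as log(1 + exp(-margin))
    return math.log1p(math.exp(-margin))
```

When the policy matches the reference, the margin is zero and the loss is log 2; as the policy assigns relatively more probability to the preferred completion, the loss decreases toward zero.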

Abstract

Instruction following is a critical ability for Large Language Models to perform downstream tasks. The standard approach to instruction tuning has relied on a specific phase of supervised fine-tuning over curated instruction datasets, optionally complemented with an alignment step over human preferences. Recent work has shown the potential of in-context learning (ICL) alternatives to guide base models towards instruction following. This type of approach is particularly relevant to circumvent the notable efforts and resources needed for supervised instruction tuning. In this work, we evaluate the viability of ICL for instruction following in scenarios where it is particularly relevant, i.e., languages other than English and across model sizes. Our results show that these scenarios result in downgraded ICL instruction following performance. We further show that applying Direct Preference Optimisation over base models can partially improve baseline results, although alternatives to current ICL instruction following will be needed to bridge the gap with larger English-centric language models.
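The ICL alternative described in the abstract amounts to prepending instruction/response demonstrations to the user query so that a base model continues the pattern, with no fine-tuning. A minimal sketch of that idea (this prompt template is an assumption for illustration, not the paper's actual format):

```python
def build_icl_prompt(demos, query):
    """Assemble a few-shot instruction-following prompt for a base model.

    demos: list of (instruction, response) pairs shown in context.
    The base model is expected to continue the pattern by generating
    a response for `query` after the final "Response:" cue.
    """
    parts = [f"Instruction: {instr}\nResponse: {resp}"
             for instr, resp in demos]
    parts.append(f"Instruction: {query}\nResponse:")
    return "\n\n".join(parts)
```

The appeal is that the same base checkpoint serves any language for which demonstrations exist, which is exactly the multilingual, small-model regime where the paper finds this approach falls short of supervised instruction tuning.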