Investigation into In-Context Learning Capabilities of Transformers

arXiv cs.LG / 4/29/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper investigates how well transformer models perform in-context learning (ICL), where tasks are solved from example input-output pairs given at inference time.
  • It provides a systematic empirical study on controlled synthetic Gaussian-mixture binary classification tasks, analyzing how in-context test accuracy scales with input dimensionality, the number of in-context examples, and the number of pre-training tasks.
  • The authors use a linear in-context classifier setup to isolate the geometric conditions that determine when models can infer task structure from context alone (see the sketch after this list).
  • The study also examines “benign overfitting,” where models memorize noisy in-context labels yet still generalize well to clean test data, and maps the parameter regions where this occurs.
  • Overall, the results yield an empirical scaling “map” that clarifies which factors (dimensionality, signal strength, and context information) make ICL succeed or fail in classification settings.
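
To make the setup concrete, here is a minimal NumPy sketch of the kind of experiment described above, assuming a common Gaussian-mixture parameterization in which each task is defined by a random unit signal direction mu and examples take the form x = y·mu + noise. The function and parameter names (sample_task, sample_examples, linear_icl_predict, sigma) are illustrative assumptions, not the paper's code, and the simple plug-in classifier stands in for a trained transformer.

```python
import numpy as np

def sample_task(d, rng):
    """Draw a task-specific unit signal direction mu (illustrative setup)."""
    mu = rng.standard_normal(d)
    return mu / np.linalg.norm(mu)

def sample_examples(mu, n, sigma, rng):
    """Sample n labeled points x = y * mu + sigma * noise, with y in {-1, +1}."""
    d = mu.shape[0]
    y = rng.choice([-1.0, 1.0], size=n)
    x = y[:, None] * mu + sigma * rng.standard_normal((n, d))
    return x, y

def linear_icl_predict(x_ctx, y_ctx, x_query):
    """Linear in-context classifier: estimate the signal direction as the
    label-weighted context mean, then classify queries by its sign."""
    w_hat = (y_ctx[:, None] * x_ctx).mean(axis=0)  # plug-in estimate of mu
    return np.sign(x_query @ w_hat)

rng = np.random.default_rng(0)
mu = sample_task(d=64, rng=rng)                               # one fresh task
x_ctx, y_ctx = sample_examples(mu, n=32, sigma=0.5, rng=rng)  # context pairs
x_q, y_q = sample_examples(mu, n=2000, sigma=0.5, rng=rng)    # clean queries
acc = (linear_icl_predict(x_ctx, y_ctx, x_q) == y_q).mean()
print(f"in-context test accuracy: {acc:.3f}")
```

Under this model, the label-weighted context mean concentrates around mu at a rate governed by sigma·sqrt(d/n), so accuracy improves with more context examples and degrades as dimensionality grows relative to signal strength, which is the kind of trade-off the study sweeps.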

Abstract

Transformers have demonstrated a strong ability for in-context learning (ICL), enabling models to solve previously unseen tasks using only example input-output pairs provided at inference time. While prior theoretical work has established conditions under which transformers can perform linear classification in-context, the empirical scaling behavior governing when this mechanism succeeds remains insufficiently characterized. In this paper, we conduct a systematic empirical study of in-context learning for Gaussian-mixture binary classification tasks. Building on the theoretical framework of Frei and Vardi (2024), we analyze how in-context test accuracy depends on three fundamental factors: the input dimension, the number of in-context examples, and the number of pre-training tasks. Using a controlled synthetic setup and a linear in-context classifier formulation, we isolate the geometric conditions under which models successfully infer task structure from context alone. We additionally investigate the emergence of benign overfitting, where models memorize noisy in-context labels while still achieving strong generalization performance on clean test data. Through extensive sweeps across dimensionality, sequence length, task diversity, and signal-to-noise regimes, we identify the parameter regions in which this phenomenon arises and characterize how it depends on data geometry and training exposure. Our results provide a comprehensive empirical map of scaling behavior in in-context classification, highlighting the critical role of dimensionality, signal strength, and contextual information in determining when in-context learning succeeds and when it fails.
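
The benign-overfitting probe can be illustrated in the same simplified setting: corrupt a fraction of the in-context labels, fit the linear plug-in classifier on the corrupted context, and measure accuracy on clean queries from the same task. This is a sketch under those assumptions; flip_frac and the other names are hypothetical, and the paper's experiments use a trained transformer rather than this closed-form classifier.

```python
import numpy as np

def noisy_context_accuracy(d, n_ctx, sigma, flip_frac, n_query=2000, seed=0):
    """Flip a fraction of the in-context labels, fit the label-weighted-mean
    classifier on the corrupted context, and report accuracy on clean queries
    from the same task (a simplified probe, not the paper's transformer)."""
    rng = np.random.default_rng(seed)
    mu = rng.standard_normal(d)
    mu /= np.linalg.norm(mu)                                # task signal direction
    y_ctx = rng.choice([-1.0, 1.0], size=n_ctx)
    x_ctx = y_ctx[:, None] * mu + sigma * rng.standard_normal((n_ctx, d))
    y_noisy = y_ctx.copy()
    flip = rng.choice(n_ctx, size=int(flip_frac * n_ctx), replace=False)
    y_noisy[flip] *= -1.0                                   # corrupt context labels
    w_hat = (y_noisy[:, None] * x_ctx).mean(axis=0)         # fit on noisy labels
    y_q = rng.choice([-1.0, 1.0], size=n_query)
    x_q = y_q[:, None] * mu + sigma * rng.standard_normal((n_query, d))
    return (np.sign(x_q @ w_hat) == y_q).mean()             # clean-test accuracy

for flip_frac in (0.0, 0.1, 0.2, 0.4):
    acc = noisy_context_accuracy(d=64, n_ctx=64, sigma=0.5, flip_frac=flip_frac)
    print(f"flip_frac={flip_frac:.1f}  clean-query accuracy={acc:.3f}")
```

In this toy version, clean-query accuracy degrades gracefully as the flip rate grows, so long as the averaged signal still dominates the noise in the estimated direction; the paper maps the parameter regions where the analogous regime arises for transformers.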