Language Models Struggle to Use Representations Learned In-Context

arXiv cs.CL / 5/4/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper asks whether large language models can not only induce representations from in-context examples but also reliably use those representations to complete downstream tasks.
  • Experiments show that open-weights LLMs struggle to deploy in-context-defined representations with novel semantics, even when those semantics appear to be captured in latent space.
  • The authors probe models with a novel task, “adaptive world modeling,” and find that even closed-source, state-of-the-art reasoning models fail to consistently leverage novel patterns presented in-context.
  • Overall, the work suggests that current LLMs can form in-context representations but lack the ability to flexibly apply them, motivating new methods to improve representation use and transfer.
  • The findings highlight a gap between in-context representation learning and the broader goal of adapting behavior to radically new deployment contexts.

Abstract

Though large language models (LLMs) have achieved great success across a wide variety of tasks, they still appear to fall short of one of the loftier goals of artificial intelligence research: creating an artificial system that can adapt its behavior to radically new contexts upon deployment. One important step towards this goal is to create systems that can induce rich representations of data that are seen in-context, and then flexibly deploy these representations to accomplish goals. Recently, Park et al. (2024) demonstrated that current LLMs are indeed capable of inducing such representations from context (i.e., in-context representation learning). The present study investigates whether LLMs can use these representations to complete simple downstream tasks. We first assess whether open-weights LLMs can use in-context representations for next-token prediction, and then probe models using a novel task, adaptive world modeling. In both tasks, we find evidence that open-weights LLMs struggle to deploy representations of novel semantics that are defined in-context, even if they encode these semantics in their latent representations. Furthermore, we assess closed-source, state-of-the-art reasoning models on the adaptive world modeling task, demonstrating that even the most performant LLMs cannot reliably leverage novel patterns presented in-context. Overall, this work seeks to inspire novel methods for encouraging models to not only encode information presented in-context, but to do so in a manner that supports flexible deployment of this information.
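
To make the setup concrete, below is a minimal, hedged sketch of the kind of evaluation the abstract describes: ordinary words are given novel semantics purely in-context (here, cells of a small grid shown via a random walk, loosely inspired by the in-context representation learning setup the paper builds on), and we ask (1) whether the model's next-token predictions respect the novel adjacency structure and (2) whether that structure is visible in its latent representations. The model name, grid, walk, and scoring are illustrative placeholders, not the authors' benchmark or code, and the tokenizer handling assumes a GPT-2-style tokenizer that prepends no special tokens.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # placeholder; the paper studies larger open-weights LLMs

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

# Novel in-context semantics: ordinary words are secretly cells of a 3x3 grid,
# and the only "valid" transition is a move between adjacent cells.
grid = [["apple", "bird", "car"],
        ["dog",   "egg",  "fish"],
        ["goat",  "hat",  "ice"]]
coords = {w: (r, c) for r, row in enumerate(grid) for c, w in enumerate(row)}

def adjacent(a, b):
    """True if words a and b occupy neighboring grid cells."""
    (r1, c1), (r2, c2) = coords[a], coords[b]
    return abs(r1 - r2) + abs(c1 - c2) == 1

# An in-context random walk over the grid; the adjacency structure is only
# recoverable from these examples and is never stated explicitly.
walk = ["egg", "dog", "apple", "bird", "egg", "fish", "ice", "hat", "egg", "bird"]

# Build the prompt token by token so we can remember where each word landed.
ids, last_pos = [], {}
for word in walk:
    word_ids = tokenizer.encode(" " + word)
    last_pos[word] = len(ids) + len(word_ids) - 1  # last sub-token of this occurrence
    ids.extend(word_ids)
input_ids = torch.tensor([ids])

with torch.no_grad():
    out = model(input_ids, output_hidden_states=True)
next_log_probs = torch.log_softmax(out.logits[0, -1], dim=-1)
hidden = out.hidden_states[-1][0]  # final-layer states, shape (seq_len, hidden_dim)

# 1) Behavioral check: does the model prefer grid-adjacent continuations of "bird"?
def word_logprob(word):
    return next_log_probs[tokenizer.encode(" " + word)[0]].item()

valid = [w for w in coords if adjacent(w, "bird")]
invalid = [w for w in coords if w != "bird" and not adjacent(w, "bird")]
print("mean log-prob, grid-adjacent next words:   ",
      sum(map(word_logprob, valid)) / len(valid))
print("mean log-prob, non-adjacent next words:    ",
      sum(map(word_logprob, invalid)) / len(invalid))

# 2) Representational check: are hidden states of adjacent words more similar
#    than those of non-adjacent words (i.e., is the grid encoded in latent space)?
seen = list(dict.fromkeys(walk))  # unique walk words, order preserved
adj_sims, nonadj_sims = [], []
for i, a in enumerate(seen):
    for b in seen[i + 1:]:
        sim = torch.cosine_similarity(hidden[last_pos[a]], hidden[last_pos[b]], dim=0).item()
        (adj_sims if adjacent(a, b) else nonadj_sims).append(sim)
print("mean cosine similarity, adjacent pairs:    ", sum(adj_sims) / len(adj_sims))
print("mean cosine similarity, non-adjacent pairs:", sum(nonadj_sims) / len(nonadj_sims))
```

In these toy terms, the paper's claim would be that the representational gap (check 2) can appear without the behavioral one (check 1): the model may encode the in-context structure in its hidden states yet fail to act on it when predicting the next token. The authors' actual tasks and metrics differ from this sketch.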