Shared Lexical Task Representations Explain Behavioral Variability In LLMs

arXiv cs.AI / 4/27/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper examines why LLM performance varies unpredictably with prompt wording by comparing two common prompting styles: instruction-based prompts, which describe the task in natural language, and example-based (few-shot) prompts, which demonstrate it with input–output pairs.
  • It finds that, although overall behavior can change greatly across prompts, the model uses shared underlying mechanisms for the same task.
  • The authors identify “lexical task heads,” task-specific attention heads whose outputs explicitly name the task and help trigger subsequent answer generation across different prompting styles (a hedged probe sketch follows this list).
  • The degree to which these lexical task heads are activated helps explain prompt-to-prompt behavioral variability, and some failures are attributed to competing task representations weakening the target signal.

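To make the idea concrete, here is a minimal sketch of the kind of probe that could look for such heads in an open model such as GPT-2: isolate one attention head's contribution to the residual stream and read it out in vocabulary space (a logit-lens-style projection). The model, the layer and head indices, and the prompt are illustrative assumptions, not the paper's exact setup.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

model = GPT2LMHeadModel.from_pretrained("gpt2")
tok = GPT2TokenizerFast.from_pretrained("gpt2")
model.eval()

# An example-based (few-shot) prompt for a toy antonym task.
prompt = "hot -> cold\nbig -> small\nfast ->"
inputs = tok(prompt, return_tensors="pt")

layer, head = 9, 6                                    # hypothetical head to inspect
d_head = model.config.n_embd // model.config.n_head

# Capture the concatenated per-head attention outputs just before the output
# projection mixes the heads together.
captured = {}
def grab(module, inp, out):
    captured["heads"] = inp[0].detach()

handle = model.transformer.h[layer].attn.c_proj.register_forward_hook(grab)
with torch.no_grad():
    model(**inputs)
handle.remove()

# Isolate this head's contribution to the residual stream at the last token:
# its slice of the concatenated output times the matching rows of the output projection.
heads_last = captured["heads"][0, -1]
head_slice = heads_last[head * d_head:(head + 1) * d_head]
W_o = model.transformer.h[layer].attn.c_proj.weight   # shape (n_embd, n_embd)
contribution = head_slice @ W_o[head * d_head:(head + 1) * d_head, :]

# Logit-lens-style readout: project the contribution into vocabulary space and
# check whether tokens that literally name the task rank highly.
logits = model.lm_head(model.transformer.ln_f(contribution))
top_ids = torch.topk(logits, 10).indices.tolist()
print(tok.convert_ids_to_tokens(top_ids))
# A "lexical task head" would surface task-naming tokens such as " opposite" or " antonym".
```

In the paper's framing, heads that pass this kind of test for both instruction-based and example-based prompts are candidates for the shared task machinery described above.
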
Abstract

One of the most common complaints about large language models (LLMs) is their prompt sensitivity -- that is, the fact that their ability to perform a task or provide a correct answer to a question can depend unpredictably on the way the question is posed. We investigate this variation by comparing two very different but commonly-used styles of prompting: instruction-based prompts, which describe the task in natural language, and example-based prompts, which provide in-context few-shot demonstration pairs to illustrate the task. We find that, despite large variation in performance as a function of the prompt, the model engages some common underlying mechanisms across different prompts of a task. Specifically, we identify task-specific attention heads whose outputs literally describe the task -- which we dub lexical task heads -- and show that these heads are shared across prompting styles and trigger subsequent answer production. We further find that behavioral variation between prompts can be explained by the degree to which these heads are activated, and that failures are at least sometimes due to competing task representations that dilute the signal of the target task. Our results together present an increasingly clear picture of how LLMs' internal representations can explain behavior that otherwise seems idiosyncratic to users and developers.
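
The "competing task representations" failure mode also lends itself to a toy illustration. The numbers below are made up and the softmax framing is a simplification rather than the paper's model; the point is only that raising a competitor's activation can erode the target task's share of the signal even when the target's own activation is unchanged.

```python
import numpy as np

def softmax(z):
    z = z - z.max()          # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

tasks = ["antonym", "synonym", "translation"]

# Clean prompt: only the target task (antonym) is strongly represented.
clean = softmax(np.array([4.0, 0.5, 0.5]))
# Ambiguous prompt: a competing task (synonym) is also strongly activated,
# even though the antonym representation itself is identical.
ambiguous = softmax(np.array([4.0, 3.5, 0.5]))

for name, probs in [("clean", clean), ("ambiguous", ambiguous)]:
    print(name, {t: round(float(p), 2) for t, p in zip(tasks, probs)})
# The target task's probability mass drops when a competitor fires alongside it --
# one way a competing representation can dilute the signal and produce a failure.
```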