Is Evaluation Awareness Just Format Sensitivity? Limitations of Probe-Based Evidence under Controlled Prompt Structure

arXiv cs.AI / 3/23/2026

Key Points

  • The paper tests whether probe-based signals of evaluation awareness in large language models are confounded with prompt and benchmark structure, using a controlled 2×2 dataset and diagnostic rewrites.
  • It finds that the probes primarily track benchmark-canonical structure and fail to generalize to free-form prompts, even when linguistic style is held constant.
  • Consequently, standard probe-based methodologies do not reliably disentangle evaluation context from surface artifacts, which limits the evidential strength of existing results.
  • The work motivates more robust evaluation methods that separate evaluation context from prompt structure when assessing evaluation awareness in LLMs.

Abstract

Prior work uses linear probes on benchmark prompts as evidence of evaluation awareness in large language models. Because evaluation context is typically entangled with benchmark format and genre, it is unclear whether probe-based signals reflect context or surface structure. We test whether these signals persist under partial control of prompt format using a controlled 2×2 dataset and diagnostic rewrites. We find that probes primarily track benchmark-canonical structure and fail to generalize to free-form prompts independent of linguistic style. Thus, standard probe-based methodologies do not reliably disentangle evaluation context from structural artifacts, limiting the evidential strength of existing results.
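
To make the confound concrete, here is a minimal, self-contained sketch of how an entangled training set lets a linear probe pass as a "context detector" while actually tracking format. Everything in it is an illustrative assumption, not the paper's setup: synthetic activation vectors stand in for real model hidden states, and two random directions stand in for "evaluation context" and "benchmark format" signal.

```python
# Sketch of the confound: if "evaluation context" and "benchmark format"
# always co-occur in the probe's training data, the fitted probe can rely
# on either signal. A controlled 2x2 test set exposes which one it uses.
# All dimensions, scales, and names here are hypothetical.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d = 64  # stand-in for a hidden-state dimension

# Two latent directions in "activation space": one carries evaluation
# context, the other benchmark-canonical formatting (e.g., MCQ scaffolding).
context_dir = rng.normal(size=d)
format_dir = rng.normal(size=d)

def activations(is_eval, is_benchmark_format, n):
    """Synthetic hidden states: noise plus signal along each direction."""
    base = rng.normal(size=(n, d))
    return base + 2.0 * is_eval * context_dir + 2.0 * is_benchmark_format * format_dir

# Entangled training data, as on real benchmarks: evaluation prompts are
# benchmark-formatted, deployment prompts are free-form.
X_train = np.vstack([activations(1, 1, 500), activations(0, 0, 500)])
y_train = np.array([1] * 500 + [0] * 500)

probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Controlled 2x2 test: cross evaluation context with prompt format.
for is_eval in (1, 0):
    for is_fmt in (1, 0):
        X_test = activations(is_eval, is_fmt, 500)
        acc = (probe.predict(X_test) == is_eval).mean()
        print(f"eval={is_eval}, benchmark_format={is_fmt}: acc={acc:.2f}")
```

On the diagonal cells (evaluation prompts in benchmark format, deployment prompts in free form) the probe scores well; the off-diagonal cells reveal how much of that score comes from format rather than context, which is precisely the distinction the paper argues existing probe-based evidence fails to make.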