Silicon Bureaucracy and AI Test-Oriented Education: Contamination Sensitivity and Score Confidence in LLM Benchmarks

arXiv cs.AI / 3/24/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper argues that public LLM benchmarks often function as a “silicon bureaucracy,” resting on the fragile assumption that benchmark scores faithfully measure generalization rather than test-taking competence.
  • It proposes an audit framework that assesses contamination sensitivity and score confidence by systematically deleting, rewriting, and perturbing benchmark items before evaluation (a minimal sketch of such perturbations follows this list).
  • Using a router-worker experimental setup that compares a clean-control condition against noisy conditions, the authors find widespread but heterogeneous above-baseline gains across models.
  • The observed gains suggest that perturbed items can still reassemble benchmark-related cues, potentially reactivating contamination-related memory; similar scores may therefore carry very different levels of confidence.
  • The paper concludes that benchmarks need not be abandoned, but should be supplemented with explicit contamination and confidence audits to improve evaluation reliability.
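
To make the audit idea concrete, here is a minimal sketch of the three noisy conditions named above. The operator names (`delete_tokens`, `rewrite`, `perturb`) and their specific implementations are illustrative assumptions; the paper's exact perturbation recipes are not specified in this summary.

```python
import random

def delete_tokens(problem: str, frac: float = 0.3, seed: int = 0) -> str:
    """Deletion: drop a random fraction of whitespace-delimited tokens."""
    rng = random.Random(seed)
    return " ".join(t for t in problem.split() if rng.random() > frac)

def rewrite(problem: str, paraphraser=None) -> str:
    """Rewriting: ideally a paraphrase (e.g. via a separate LLM call);
    falls back to reordering sentences when no paraphraser is given."""
    if paraphraser is not None:
        return paraphraser(problem)
    sentences = [s.strip() for s in problem.split(".") if s.strip()]
    return ". ".join(reversed(sentences)) + "."

def perturb(problem: str, seed: int = 0) -> str:
    """Perturbation: light character noise that keeps most surface cues."""
    rng = random.Random(seed)
    chars = list(problem)
    for i in range(0, len(chars) - 1, 13):
        if rng.random() < 0.5:
            chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)
```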

Abstract

Public benchmarks increasingly govern how large language models (LLMs) are ranked, selected, and deployed. We frame this benchmark-centered regime as Silicon Bureaucracy and AI Test-Oriented Education, and argue that it rests on a fragile assumption: that benchmark scores directly reflect genuine generalization. In practice, however, such scores may conflate exam-oriented competence with principled capability, especially when contamination and semantic leakage are difficult to exclude from modern training pipelines. We therefore propose an audit framework for analyzing contamination sensitivity and score confidence in LLM benchmarks. Using a router-worker setup, we compare a clean-control condition with noisy conditions in which benchmark problems are systematically deleted, rewritten, and perturbed before being passed downstream. For a genuinely clean benchmark, noisy conditions should not consistently outperform the clean-control baseline. Yet across multiple models, we find widespread but heterogeneous above-baseline gains under noisy conditions, indicating that benchmark-related cues may be reassembled and can reactivate contamination-related memory. These results suggest that similar benchmark scores may carry substantially different levels of confidence. Rather than rejecting benchmarks altogether, we argue that benchmark-based evaluation should be supplemented with explicit audits of contamination sensitivity and score confidence.
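
As a rough illustration of the comparison the abstract describes, the sketch below contrasts a clean-control pass with the noisy conditions and reports each condition's gain over the baseline. Here the perturbation step stands in for the router and `worker` for the downstream model; the `contamination_audit` name, the callable interface, and the exact-match scoring are simplifying assumptions, not the authors' actual harness.

```python
from statistics import mean

def contamination_audit(worker, items, answers, conditions):
    """Compare clean-control accuracy with each noisy condition.

    worker: callable mapping a problem string to an answer string.
    conditions: dict mapping a condition name to a perturbation
    callable (e.g. delete_tokens, rewrite, perturb from above).
    """
    def accuracy(transform):
        return mean(
            float(worker(transform(q)).strip() == a.strip())
            for q, a in zip(items, answers)
        )

    clean = accuracy(lambda q: q)  # clean-control baseline
    report = {}
    for name, fn in conditions.items():
        noisy = accuracy(fn)
        # For a genuinely clean benchmark, noisy accuracy should not
        # consistently exceed the clean baseline; persistent positive
        # gains are the contamination signal the paper describes.
        report[name] = {"accuracy": noisy, "gain_over_clean": noisy - clean}
    return clean, report
```

Under this reading, a genuinely clean benchmark should yield gains clustered at or below zero across conditions; the widespread positive gains the authors report would instead indicate that scores partly reflect memorized benchmark material.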