Computational Hermeneutics: Evaluating generative AI as a cultural technology

arXiv cs.AI / 4/21/2026


Key Points

  • The paper argues that generative AI should be evaluated as a cultural technology, because “culture” is not merely an external variable but part of how these systems operate.
  • It proposes that GenAI systems act as “context machines” that must handle three interpretive challenges: situatedness, plurality, and ambiguity.
  • It introduces “computational hermeneutics” as an emerging framework that offers an interpretive account of what GenAI systems do and how they might do it better.
  • The authors lay out three evaluation principles: use iterative benchmarks, include people alongside machines, and evaluate cultural context rather than only model outputs.
  • The work suggests a shift in evaluation philosophy from standardized accuracy questions to context- and meaning-centered assessments.

Abstract

Generative AI systems are increasingly recognized as cultural technologies, yet current evaluation frameworks often treat culture as a variable to be measured rather than fundamental to the system's operation. Drawing on hermeneutic theory from the humanities, we argue that GenAI systems function as "context machines" that must inherently address three interpretive challenges: situatedness (meaning only emerges in context), plurality (multiple valid interpretations coexist), and ambiguity (interpretations naturally conflict). We present computational hermeneutics as an emerging framework offering an interpretive account of what GenAI systems do, and how they might do it better. We offer three principles for hermeneutic evaluation -- that benchmarks should be iterative, not one-off; include people, not just machines; and measure cultural context, not just model output. This perspective offers a nascent paradigm for designing and evaluating contemporary AI systems: shifting from standardized questions about accuracy to contextual ones about meaning.