Measuring the Machine: Evaluating Generative AI as Pluralist Sociotechnical Systems

arXiv cs.AI / April 23, 2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The thesis argues that generative AI benchmarks do more than measure model performance—they help shape what is considered “good” by enacting particular values and meanings through sociotechnical processes.
  • It critiques two common evaluation styles, functionalist and prescriptive approaches, for obscuring how meaning and values are produced in real-world pluralist contexts.
  • It proposes a descriptive framework called Machine-Society-Human (MaSH) Loops to evaluate generative AI as a pluralist sociotechnical system by tracing recursive co-construction among models, users, and institutions.
  • Methodologically, it introduces the World Values Benchmark, a distributional evaluation grounded in World Values Survey data with structured prompt sets and anchor-aware scoring (see the sketch after this list).
  • Empirical case studies analyze value drift in early GPT-3 and apply sociotechnical evaluation in a real-estate setting; the thesis concludes that benchmarking is a governance function rather than a neutral observation.
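
To make "anchor-aware scoring" concrete, here is a minimal Python sketch of one plausible reading of it: map free-text model responses onto a survey item's answer anchors, then compare the resulting distribution to a human reference distribution. The anchors, matching rule, example responses, and reference numbers are all illustrative stand-ins, not the thesis's actual World Values Benchmark implementation.

```python
# A minimal sketch of anchor-aware distributional scoring, assuming a
# 4-point WVS-style importance item. Everything below is an illustrative
# stand-in, not the thesis's actual benchmark code.
from collections import Counter

import numpy as np

# Hypothetical answer anchors for one World Values Survey-style item.
ANCHORS = ["very important", "rather important",
           "not very important", "not at all important"]

def response_distribution(responses):
    """Map free-text model responses onto the anchor scale and return a
    normalized frequency distribution over the anchors."""
    counts = Counter()
    for text in responses:
        lowered = text.lower()
        # Match longer anchors first so "not very important" is not
        # misread as "very important".
        for i, anchor in sorted(enumerate(ANCHORS), key=lambda x: -len(x[1])):
            if anchor in lowered:
                counts[i] += 1
                break
    total = sum(counts.values()) or 1
    return np.array([counts[i] / total for i in range(len(ANCHORS))])

def ordinal_emd(p, q):
    """1-Wasserstein (earth mover's) distance between two distributions
    on an ordered Likert scale: the sum of absolute CDF differences."""
    return float(np.abs(np.cumsum(p) - np.cumsum(q)).sum())

# Hypothetical human reference distribution for this item, e.g. response
# frequencies taken from one World Values Survey wave and country.
wvs_reference = np.array([0.55, 0.30, 0.10, 0.05])

# Hypothetical model completions for a structured prompt set.
model_responses = ["Very important.", "Rather important, I would say.",
                   "very important", "Not very important overall."]

model_dist = response_distribution(model_responses)
print("model distribution:", model_dist)
print("distance to WVS reference:", ordinal_emd(model_dist, wvs_reference))
```

Scoring against a full reference distribution rather than a single "correct" answer is what makes the approach distributional: the question is not whether the model picks the majority anchor, but how closely its answer distribution tracks a given population's.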

Abstract

In measurement theory, instruments do not simply record reality; they help constitute what is observed. The same holds for generative AI evaluation: benchmarks do not just measure, they shape what models appear to be. Functionalist benchmarks treat models as isolated predictors, while prescriptive approaches assess what systems ought to be. Both obscure the sociotechnical processes through which meaning and values are enacted, risking the reification of narrow cultural perspectives in pluralist contexts. This thesis advances a descriptive alternative. It argues that generative AI must be evaluated as a pluralist sociotechnical system and develops Machine-Society-Human (MaSH) Loops, a framework for tracing how models, users, and institutions recursively co-construct meaning and values. Evaluation shifts from judging outputs to examining how values are enacted in interaction. Three contributions follow. Conceptually, MaSH Loops reframes evaluation as a recursive, enactive process. Methodologically, the World Values Benchmark introduces a distributional approach grounded in World Values Survey data, structured prompt sets, and anchor-aware scoring. Empirically, the thesis demonstrates these contributions through two cases: value drift in early GPT-3 and sociotechnical evaluation in real estate. A final chapter draws on participatory realism to argue that prompting and evaluation are constitutive interventions, not neutral observations. The thesis argues that static benchmarks are insufficient for generative AI. Responsible evaluation requires pluralist, process-oriented frameworks that make visible whose values are enacted. Evaluation is therefore a site of governance, shaping how AI systems are understood, deployed, and trusted.
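
On the empirical side, the GPT-3 case invites a simple distributional reading of "value drift": collect anchor-level response distributions from two model snapshots on the same item and measure how far apart they sit. A toy sketch, with invented distributions and scipy's Jensen-Shannon distance standing in for whatever metric the thesis actually uses:

```python
# A toy sketch of quantifying value drift between two model snapshots on
# one WVS-style item. The distributions are invented for illustration;
# the thesis's actual early-GPT-3 analysis is not reproduced here.
import numpy as np
from scipy.spatial.distance import jensenshannon

# Hypothetical anchor-level response distributions for the same item,
# collected from an earlier and a later model version.
dist_earlier = np.array([0.50, 0.30, 0.15, 0.05])
dist_later = np.array([0.20, 0.25, 0.35, 0.20])

# Jensen-Shannon distance (base 2) lies in [0, 1]; larger values mean the
# enacted value distribution has shifted more between snapshots.
drift = jensenshannon(dist_earlier, dist_later, base=2)
print(f"value drift (JS distance): {drift:.3f}")
```

Tracking such distances across checkpoints, prompts, and populations is one way a static benchmark could become the kind of process-oriented instrument the thesis calls for.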