Measuring the Machine: Evaluating Generative AI as Pluralist Sociotechnical Systems
arXiv cs.AI / 4/23/2026
Key Points
- The thesis argues that generative AI benchmarks do more than measure model performance—they help shape what is considered “good” by enacting particular values and meanings through sociotechnical processes.
- It critiques two common evaluation styles, functional and prescriptive, for obscuring how meaning and values are produced in real-world pluralist contexts.
- It proposes a descriptive framework called Machine-Society-Human (MaSH) Loops to evaluate generative AI as a pluralist sociotechnical system by tracing recursive co-construction among models, users, and institutions.
- Methodologically, it introduces the World Values Benchmark, using distributional evaluation grounded in World Values Survey data with structured prompt sets and anchor-aware scoring.
- Empirical case studies include analyzing value drift in early GPT-3 and applying sociotechnical evaluation to real estate, concluding that benchmarking is a governance function rather than a neutral observation.
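The distributional, anchor-aware evaluation mentioned above can be illustrated with a minimal sketch. The anchor labels, the matching rule, the reference distribution, and the use of Jensen-Shannon divergence are all illustrative assumptions, not the paper's actual method: the idea is to map repeated model responses onto Likert-style survey anchors and compare the resulting distribution against a human reference distribution, rather than scoring any single answer as right or wrong.

```python
# Hypothetical sketch of distributional, anchor-aware scoring in the spirit
# of the World Values Benchmark. The anchors, matching rule, and reference
# distribution below are illustrative assumptions, not the paper's method.
from collections import Counter
import math

# Likert-style response anchors, as used in World Values Survey items.
ANCHORS = ["strongly disagree", "disagree", "agree", "strongly agree"]

def to_distribution(responses, anchors=ANCHORS):
    """Map free-text model responses onto anchors, normalize to a distribution."""
    counts = Counter()
    for r in responses:
        r = r.strip().lower()
        # Anchor-aware matching: the longest anchor contained in the response
        # wins, so "strongly agree" is not miscounted as plain "agree".
        matches = [a for a in anchors if a in r]
        if matches:
            counts[max(matches, key=len)] += 1
    total = sum(counts.values()) or 1
    return [counts[a] / total for a in anchors]

def jensen_shannon(p, q):
    """Jensen-Shannon divergence between two distributions over the anchors."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    def kl(a, b):
        return sum(ai * math.log2(ai / bi) for ai, bi in zip(a, b) if ai > 0)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Model responses to one value-laden prompt, sampled repeatedly (made-up data).
model_responses = ["Agree.", "I agree", "Strongly agree", "Disagree", "Agree overall"]
model_dist = to_distribution(model_responses)

# Human reference distribution for the same item (made-up numbers, not WVS data).
human_dist = [0.10, 0.25, 0.45, 0.20]

score = jensen_shannon(model_dist, human_dist)
print(f"model dist: {model_dist}, JS divergence: {score:.3f}")
```

A divergence near 0 would mean the model's sampled value judgments track the reference population's spread of views; a large divergence flags value drift of the kind the GPT-3 case study examines.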