Normative Common Ground Replication (NormCoRe): Replication-by-Translation for Studying Norms in Multi-agent AI

arXiv cs.AI / March 13, 2026

Key Points

  • NormCoRe introduces Normative Common Ground Replication, a framework to systematically translate human subject experimental designs into multi-agent AI (MAAI) experiments to study normative coordination.
  • It combines behavioral science, replication research, and MAAI architectures to map the structural layers of human studies onto AI agent studies, enabling rigorous documentation and analysis of norms in MAAI.
  • The authors demonstrate the approach by replicating a distributive justice experiment conducted under a "veil of ignorance", finding that AI normative judgments can differ from human baselines and are sensitive to both the foundation model and the language used to instantiate agent personas.
  • The work provides a principled pathway for analyzing norms in MAAI and guides design choices when AI agents automate or support tasks traditionally performed by humans.

Abstract

In the late 2010s, the fashion trend NormCore framed sameness as a signal of belonging, illustrating how norms emerge through collective coordination. Today, similar forms of normative coordination can be observed in systems based on Multi-agent Artificial Intelligence (MAAI), as AI-based agents deliberate, negotiate, and converge on shared decisions in fairness-sensitive domains. Yet, existing empirical approaches often treat norms as targets for alignment or replication, implicitly assuming equivalence between human subjects and AI agents and leaving collective normative dynamics insufficiently examined. To address this gap, we propose Normative Common Ground Replication (NormCoRe), a novel methodological framework to systematically translate the design of human subject experiments into MAAI environments. Building on behavioral science, replication research, and state-of-the-art MAAI architectures, NormCoRe maps the structural layers of human subject studies onto the design of AI agent studies, enabling systematic documentation of study design and analysis of norms in MAAI. We demonstrate the utility of NormCoRe by replicating a seminal experimental study on distributive justice, in which participants negotiate fairness principles under a "veil of ignorance". We show that normative judgments in AI agent studies can differ from human baselines and are sensitive to the choice of the foundation model and the language used to instantiate agent personas. Our work provides a principled pathway for analyzing norms in MAAI and helps to guide, reflect, and document design choices whenever AI agents are used to automate or support tasks formerly carried out by humans.