Principles Do Not Apply Themselves: A Hermeneutic Perspective on AI Alignment

arXiv cs.AI · April 14, 2026


Key Points

  • The paper argues that AI alignment cannot be reduced to simply applying stated principles or preferences, because principles often do not determine their own concrete application in real cases.
  • It frames alignment as requiring interpretive, context-sensitive judgment about how to read, apply, and prioritize principles when they conflict, when they are too broad to settle a case, or when the relevant facts are unclear.
  • It links this interpretive component to empirical observations that a substantial portion of preference-labeling data involves situations of principle conflict or indifference, where the principle set does not uniquely dictate an outcome.
  • The authors draw an operational implication: because interpretive judgments manifest in outputs, alignment-relevant behaviors may become visible only in the distribution of responses a model generates during deployment.
  • They formalize this risk by distinguishing deployment-induced from corpus-induced evaluation and showing that off-policy audits can miss failures when those response distributions diverge (one way to write the distinction down is sketched after this list).
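
The paper's definitions are not reproduced in this summary, so the sketch below shows one natural way to state the distinction; the notation ($\mathcal{D}$, $\pi$, $\mathcal{C}$, $f$) and the total-variation bound are assumptions introduced here for illustration, not the paper's stated formalism.

```latex
% Assumed notation (not taken from the paper):
%   \mathcal{D}        -- deployment prompt distribution
%   \pi                -- deployed model's response policy
%   \mathcal{D}\times\pi -- joint law of (x, y) with x ~ D, y ~ pi(. | x)
%   \mathcal{C}        -- fixed audit corpus of (prompt, response) pairs
%   f(x,y) \in [0,1]   -- indicator (or score) of an alignment-relevant failure
\[
  R_{\mathrm{dep}}(\pi) = \mathbb{E}_{x \sim \mathcal{D},\; y \sim \pi(\cdot \mid x)}\bigl[f(x,y)\bigr],
  \qquad
  R_{\mathrm{corp}} = \mathbb{E}_{(x,y) \sim \mathcal{C}}\bigl[f(x,y)\bigr].
\]
% Because f is bounded in [0,1], the gap between the two audits is controlled
% by the total variation distance between the deployment-induced joint
% distribution and the corpus distribution:
\[
  \bigl|\, R_{\mathrm{dep}}(\pi) - R_{\mathrm{corp}} \,\bigr|
  \;\le\;
  \mathrm{TV}\bigl(\mathcal{D} \times \pi,\; \mathcal{C}\bigr).
\]
% When the two response distributions diverge, a small R_corp (a clean
% off-policy audit) is compatible with a large R_dep (frequent failures
% in deployment).
```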

Abstract

AI alignment is often framed as the task of ensuring that an AI system follows a set of stated principles or human preferences, but general principles rarely determine their own application in concrete cases. When principles conflict, when they are too broad to settle a situation, or when the relevant facts are unclear, an additional act of judgment is required. This paper analyzes that step through the lens of hermeneutics and argues that alignment therefore includes an interpretive component: it involves context-sensitive judgments about how principles should be read, applied, and prioritized in practice. We connect this claim to recent empirical findings showing that a substantial portion of preference-labeling data falls into cases of principle conflict or indifference, where the principle set does not uniquely determine a decision. We then draw an operational consequence: because such judgments are expressed in behavior, many alignment-relevant choices appear only in the distribution of responses a model generates at deployment time. To formalize this point, we distinguish deployment-induced and corpus-induced evaluation and show that off-policy audits can fail to capture alignment-relevant failures when the two response distributions differ. Taken together, these results support our central claim: principle-specified alignment includes a context-dependent interpretive component.
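
To make the audit-gap claim concrete, here is a small, self-contained simulation. It is entirely illustrative: the prompt mix, the two toy policies, and the failure rule are assumptions, not the paper's experiment. It contrasts a corpus-induced audit (failures measured on responses sampled from a reference policy) with a deployment-induced audit (failures measured on the deployed policy's own responses).

```python
import random

random.seed(0)

# Toy setup: prompts are either "clear" cases, where the stated principles fix
# the answer, or "conflict" cases, where principles underdetermine it and the
# model's interpretive choice decides whether the response counts as a failure.
PROMPTS = ["clear"] * 70 + ["conflict"] * 30  # 30% conflict cases, an assumed rate

def reference_policy(prompt: str) -> str:
    """Policy whose responses populate the audit corpus (off-policy data)."""
    if prompt == "clear":
        return "safe"
    # The reference policy rarely resolves conflict cases badly.
    return "unsafe" if random.random() < 0.05 else "safe"

def deployed_policy(prompt: str) -> str:
    """Deployed policy: identical on clear cases, but it makes different
    interpretive choices on conflict cases, so its response distribution
    diverges exactly where judgment is required."""
    if prompt == "clear":
        return "safe"
    return "unsafe" if random.random() < 0.60 else "safe"

def failure(response: str) -> float:
    """Failure indicator f(x, y): 1 if the response is judged misaligned."""
    return 1.0 if response == "unsafe" else 0.0

def audit(policy, prompts, n_samples: int = 2000) -> float:
    """Estimate a policy's failure rate over the prompt mix by sampling."""
    total = sum(failure(policy(random.choice(prompts))) for _ in range(n_samples))
    return total / n_samples

# Corpus-induced evaluation: failure rate over the reference (audit) corpus.
corpus_rate = audit(reference_policy, PROMPTS)
# Deployment-induced evaluation: failure rate over the deployed policy's
# own response distribution.
deployment_rate = audit(deployed_policy, PROMPTS)

print(f"corpus-induced failure rate:     {corpus_rate:.3f}")
print(f"deployment-induced failure rate: {deployment_rate:.3f}")
# The off-policy audit reports a low rate (~0.015) even though the deployed
# policy fails far more often (~0.18), because the two response distributions
# diverge on exactly the conflict cases where interpretive judgment matters.
```

The point of the toy example is only that the gap lives in the conflict cases: on prompts where the principles settle the answer, both audits agree, so any corpus that underweights the model's own resolutions of underdetermined cases will understate deployment-time failures.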