How Independent are Large Language Models? A Statistical Framework for Auditing Behavioral Entanglement and Reweighting Verifier Ensembles

arXiv cs.CL / 4/10/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper argues that LLMs often exhibit hidden behavioral dependencies (“behavioral entanglement”) due to shared pretraining data, distillation, and alignment pipelines, challenging the assumption of independence in multi-model systems like LLM-as-a-judge and ensemble verification.
  • It proposes a black-box auditing framework using a multi-resolution hierarchy and two information-theoretic metrics: a Difficulty-Weighted Behavioral Entanglement Index (focused on synchronized failures on easier tasks) and a Cumulative Information Gain (CIG) metric (capturing directional alignment in erroneous outputs).
  • Experiments across 18 LLMs from six model families show widespread entanglement and demonstrate that CIG correlates with reduced judge precision, indicating that stronger dependency leads to greater over-endorsement bias.
  • The study introduces a practical de-entangling technique for verifier ensemble reweighting, where inferred independence is used to adjust model contributions and reduce correlated bias.
  • In the reported use case, the de-entangled reweighting approach improves verification accuracy by up to 4.5% over majority voting.
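
The paper's exact formulas are not reproduced here, but the intuition behind the difficulty-weighted index can be sketched. In the toy function below (names and the specific weighting are assumptions, not the paper's definition), joint failures of two models are weighted by how easy the task is, so synchronized failures on easy tasks, the signature of shared error modes, dominate the score:

```python
import numpy as np

def difficulty_weighted_entanglement(fail_a, fail_b, difficulty):
    """Toy difficulty-weighted entanglement score for a pair of models.

    fail_a, fail_b : boolean arrays, True where each model failed a task.
    difficulty     : per-task difficulty in [0, 1] (e.g., the fraction of a
                     reference model pool that failed the task).

    Joint failures on *easy* tasks (low difficulty) get the largest weight,
    mirroring the intuition that synchronized easy-task failures indicate
    shared error modes rather than intrinsic task hardness.
    """
    fail_a = np.asarray(fail_a, dtype=bool)
    fail_b = np.asarray(fail_b, dtype=bool)
    w = 1.0 - np.asarray(difficulty, dtype=float)  # easy task -> weight near 1
    joint = fail_a & fail_b
    # Normalize by total weight so the score lies in [0, 1].
    return float((w * joint).sum() / w.sum())

# Two models that fail together mostly on easy tasks score high...
easy_sync = difficulty_weighted_entanglement(
    [1, 1, 0, 0], [1, 1, 0, 0], difficulty=[0.1, 0.2, 0.9, 0.8])
# ...while the same number of joint failures on hard tasks scores low.
hard_sync = difficulty_weighted_entanglement(
    [0, 0, 1, 1], [0, 0, 1, 1], difficulty=[0.1, 0.2, 0.9, 0.8])
assert easy_sync > hard_sync
```

Any monotone decreasing weight in difficulty would serve the same purpose; the linear `1 - difficulty` choice here is purely illustrative.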

Abstract

The rapid growth of the large language model (LLM) ecosystem raises a critical question: are seemingly diverse models truly independent? Shared pretraining data, distillation, and alignment pipelines can induce hidden behavioral dependencies (latent entanglement) that undermine multi-model systems such as LLM-as-a-judge pipelines and ensemble verification, which implicitly assume independent signals. In practice, this manifests as correlated reasoning patterns and synchronized failures, where apparent agreement reflects shared error modes rather than independent validation. To address this, we develop a statistical framework for auditing behavioral entanglement among black-box LLMs. Our approach introduces a multi-resolution hierarchy that characterizes the joint failure manifold through two information-theoretic metrics: (i) a Difficulty-Weighted Behavioral Entanglement Index, which amplifies synchronized failures on easy tasks, and (ii) a Cumulative Information Gain (CIG) metric, which captures directional alignment in erroneous responses. Through extensive experiments on 18 LLMs from six model families, we identify widespread behavioral entanglement and analyze its impact on LLM-as-a-judge evaluation. We find that CIG exhibits a statistically significant association with degradation in judge precision, with a Spearman coefficient of 0.64 (p < 0.001) for GPT-4o-mini and 0.71 (p < 0.01) for Llama3-based judges, indicating that stronger dependency corresponds to increased over-endorsement bias. Finally, we demonstrate a practical use case of entanglement auditing through de-entangled verifier ensemble reweighting. By adjusting model contributions based on inferred independence, the proposed method mitigates correlated bias and improves verification performance, achieving up to a 4.5% accuracy gain over majority voting.
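
The reweighting idea in the final sentence can be illustrated with a minimal sketch (the function, the `1 - mean entanglement` weighting, and the voting rule are assumptions, not the paper's method): each verifier's vote is down-weighted by its average pairwise entanglement with the rest of the ensemble, so a bloc of correlated models cannot outvote more independent ones.

```python
import numpy as np

def deentangled_vote(votes, entanglement):
    """Hypothetical de-entangled verifier vote.

    votes        : (n,) array of {0, 1} accept/reject votes, one per model.
    entanglement : (n, n) symmetric matrix of pairwise entanglement scores
                   in [0, 1]; the diagonal is ignored.

    Each model's weight is one minus its mean entanglement with the other
    models, normalized to sum to 1, so correlated models share their vote.
    """
    votes = np.asarray(votes, dtype=float)
    E = np.asarray(entanglement, dtype=float)
    n = len(votes)
    mean_ent = (E.sum(axis=1) - np.diag(E)) / (n - 1)
    w = 1.0 - mean_ent            # more independent -> larger weight
    w = w / w.sum()
    return int(np.round(w @ votes))

# Three heavily entangled verifiers wrongly accept; two independent ones reject.
votes = [1, 1, 1, 0, 0]
E = np.zeros((5, 5))
E[:3, :3] = 0.9                   # the first three share error modes
np.fill_diagonal(E, 0.0)
# Plain majority voting would accept (3 of 5); the reweighted vote rejects.
assert deentangled_vote(votes, E) == 0
```

The example reproduces the failure mode the abstract describes: under majority voting, agreement among entangled models masquerades as independent validation, while the reweighted vote discounts the correlated bloc.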