Propensity Inference: Environmental Contributors to LLM Behaviour

arXiv cs.CL · April 24, 2026


Key Points

  • The paper develops methods for measuring language models' propensity for unsanctioned behaviour, motivated by loss-of-control risks from misaligned AI systems.
  • It contributes three methodological improvements: analysing how changes to environmental factors affect behaviour, quantifying effect sizes with Bayesian generalised linear models, and taking explicit measures against circular analysis.
  • Varying 12 environmental factors (6 strategic, 6 non-strategic) across 23 language models and 11 evaluation environments, the study estimates how much behaviour is explained by strategic versus non-strategic aspects of the environment.
  • Strategic and non-strategic factors contribute roughly equally to explaining behaviour, with no observed trend of strategic factors becoming more or less influential as capabilities improve.
  • The authors find some evidence of a trend towards increased sensitivity to goal conflicts, and call for developing theoretical frameworks and cognitive models of AI decision-making into empirically testable forms.
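
To make the effect-size idea concrete, here is a minimal sketch of Bayesian effect estimation for a single binary environmental factor. It is a deliberate simplification of the paper's Bayesian generalised linear models: a Beta-Binomial comparison of unsanctioned-behaviour rates with and without one factor, using hypothetical counts (the data, prior, and variable names are illustrative assumptions, not from the paper).

```python
import random

def beta_samples(successes, trials, n=10_000, rng=None):
    """Posterior samples for a behaviour rate under a flat Beta(1, 1) prior."""
    rng = rng or random
    a, b = 1 + successes, 1 + trials - successes
    return [rng.betavariate(a, b) for _ in range(n)]

rng = random.Random(0)
# Hypothetical counts: unsanctioned behaviour observed in 30/100 rollouts
# with the factor present vs 10/100 with it absent.
with_factor = beta_samples(30, 100, rng=rng)
without_factor = beta_samples(10, 100, rng=rng)

# Effect size as the posterior difference in behaviour rates.
diffs = [w - wo for w, wo in zip(with_factor, without_factor)]
mean_diff = sum(diffs) / len(diffs)
p_positive = sum(d > 0 for d in diffs) / len(diffs)
print(f"effect size ~ {mean_diff:.3f}, P(effect > 0) = {p_positive:.3f}")
```

The paper's actual models additionally share information across the 12 factors, 23 models, and 11 environments via a generalised linear model; the sketch only conveys why a posterior over effect sizes (rather than a point estimate) supports the kind of uncertainty-aware comparisons the key points describe.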

Abstract

Motivated by loss of control risks from misaligned AI systems, we develop and apply methods for measuring language models' propensity for unsanctioned behaviour. We contribute three methodological improvements: analysing effects of changes to environmental factors on behaviour, quantifying effect sizes via Bayesian generalised linear models, and taking explicit measures against circular analysis. We apply the methodology to measure the effects of 12 environmental factors (6 strategic in nature, 6 non-strategic) and thus the extent to which behaviour is explained by strategic aspects of the environment, a question relevant to risks from misalignment. Across 23 language models and 11 evaluation environments, we find approximately equal contributions from strategic and non-strategic factors for explaining behaviour, do not find strategic factors becoming more or less influential as capabilities improve, and find some evidence for a trend for increased sensitivity to goal conflicts. Finally, we highlight a key direction for future propensity research: the development of theoretical frameworks and cognitive models of AI decision-making into empirically testable forms.