Estimating Tail Risks in Language Model Output Distributions

arXiv cs.AI / 4/27/2026


Key Points

  • The paper highlights that large-scale language model usage makes rare “tail” behaviors more likely to occur in aggregate, even if alignment reduces overall harmfulness risk.
  • It proposes an importance-sampling-based method that estimates the probability of harmful outputs for any given query without brute-force sampling.
  • The approach generates “unsafe” variants of the target model to increase the probability of harmful outputs, enabling more sample-efficient tail-risk estimation (sketched in code after this list).
  • Experiments on misuse and misalignment benchmarks show estimates that match brute-force Monte Carlo results while using 10–20× fewer samples, including estimating harmful output probabilities around 10^-4 with roughly 500 samples.
  • The authors report that their harmfulness estimates can also expose model sensitivity to input perturbations and help predict deployment risks.
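
The core estimator described in the key points is standard importance sampling: sample responses from an “unsafe” proposal model where harmful outputs are common, then reweight each harmful sample by the ratio of target-model to proposal-model probability. The sketch below illustrates that idea under assumed interfaces; `target_logprob`, `unsafe_sample`, `unsafe_logprob`, and `is_harmful` are hypothetical callables, not the paper's actual API.

```python
import numpy as np

def estimate_harm_probability(query, target_logprob, unsafe_sample,
                              unsafe_logprob, is_harmful, n_samples=500):
    """Importance-sampling estimate of P(harmful output | query) under the target model.

    Assumed interfaces (illustrative only):
      target_logprob(query, y): log p(y | query) under the target model
      unsafe_sample(query):     draw y ~ q(y | query) from the unsafe proposal model
      unsafe_logprob(query, y): log q(y | query) under the unsafe proposal model
      is_harmful(query, y):     1 if a judge flags y as harmful, else 0
    """
    weighted = []
    for _ in range(n_samples):
        y = unsafe_sample(query)  # sample where harmful outputs are no longer rare
        log_w = target_logprob(query, y) - unsafe_logprob(query, y)  # importance weight
        weighted.append(is_harmful(query, y) * np.exp(log_w))
    estimate = float(np.mean(weighted))                       # unbiased estimate of P(harmful)
    std_err = float(np.std(weighted, ddof=1) / np.sqrt(n_samples))
    return estimate, std_err
```

The same estimator, run across paraphrases or small perturbations of a query, is one plausible way to surface the input-sensitivity behavior the authors mention, though the paper's exact procedure may differ.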

Abstract

Language models are increasingly capable and are being rapidly deployed at population scale. As a result, the safety of these models is increasingly high-stakes. Fortunately, advances in alignment have significantly reduced the likelihood of harmful model outputs. However, when models are queried billions of times a day, even rare worst-case behaviors will occur. Current safety evaluations focus on capturing the distribution of inputs that yield harmful outputs; they disregard the probabilistic nature of models and their tail output behavior. To measure this tail risk, we propose a method to efficiently estimate the probability of harmful outputs for any input query. Instead of naive brute-force sampling from the target model, where harmful outputs could be rare, we operationalize importance sampling by creating unsafe versions of the target model. These unsafe versions enable sample-efficient estimation by making harmful outputs more probable. On benchmarks measuring misuse and misalignment, these estimates match brute-force Monte Carlo estimates using 10–20× fewer samples. For example, we can estimate the probability of harmful outputs on the order of 10^-4 with just 500 samples. Additionally, we find that these harmfulness estimates can reveal the sensitivity of models to perturbations in model input and predict deployment risks. Our work demonstrates that accurate rare-event estimation is both critical and feasible for safety evaluations. Code is available at https://github.com/rangell/LMTailRisk.
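
To see why brute-force Monte Carlo struggles at this scale, a back-of-the-envelope calculation (not from the paper) is enough: for a Bernoulli event with probability p, the relative standard error of the naive estimator after n samples is sqrt((1 - p) / (p · n)), so rare events demand enormous sample counts.

```python
def mc_samples_for_relative_error(p: float, rel_err: float) -> float:
    """Samples needed for a naive Monte Carlo estimate of a probability p
    to reach a given relative standard error."""
    return (1.0 - p) / (p * rel_err ** 2)

p = 1e-4
print(mc_samples_for_relative_error(p, rel_err=0.3))  # ~1.1e5 samples at 30% error
print(mc_samples_for_relative_error(p, rel_err=0.1))  # ~1.0e6 samples at 10% error
```

Importance sampling from an unsafe proposal reduces the variance of the estimator rather than changing this arithmetic, which is how estimates near 10^-4 become feasible with roughly 500 samples instead of hundreds of thousands.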