Adversarial Moral Stress Testing of Large Language Models

arXiv cs.AI / 4/2/2026

💬 OpinionSignals & Early TrendsIdeas & Deep AnalysisModels & Research

Key Points

  • The paper argues that current LLM safety benchmarks (often single-round, aggregate metrics like toxicity/refusal rates) can miss rare but severe ethical failures that emerge during realistic multi-turn adversarial use.
  • It introduces Adversarial Moral Stress Testing (AMST), a framework that applies structured “stress transformations” to prompts and evaluates ethical robustness with distribution-aware metrics capturing variance, tail risk, and temporal behavioral drift across rounds.
  • AMST is evaluated on multiple state-of-the-art LLMs (including LLaMA-3-8B, GPT-4o, and DeepSeek-v3) and reveals robustness differences and progressive degradation patterns not detectable with conventional single-round testing.
  • The findings suggest robustness depends more on distributional stability and tail behavior than on average performance, emphasizing the need for robustness-aware monitoring in adversarial deployments.
  • The methodology is presented as scalable and model-agnostic, aiming to help developers assess and monitor LLM-enabled software systems more reliably under adversarial multi-round interaction.

Abstract

Evaluating the ethical robustness of large language models (LLMs) deployed in software systems remains challenging, particularly under sustained adversarial user interaction. Existing safety benchmarks typically rely on single-round evaluations and aggregate metrics, such as toxicity scores and refusal rates, which offer limited visibility into behavioral instability that may arise during realistic multi-turn interactions. As a result, rare but high-impact ethical failures and progressive degradation effects may remain undetected prior to deployment. This paper introduces Adversarial Moral Stress Testing (AMST), a stress-based evaluation framework for assessing ethical robustness under adversarial multi-round interactions. AMST applies structured stress transformations to prompts and evaluates model behavior through distribution-aware robustness metrics that capture variance, tail risk, and temporal behavioral drift across interaction rounds. We evaluate AMST on several state-of-the-art LLMs, including LLaMA-3-8B, GPT-4o, and DeepSeek-v3, using a large set of adversarial scenarios generated under controlled stress conditions. The results demonstrate substantial differences in robustness profiles across models and expose degradation patterns that are not observable under conventional single-round evaluation protocols. In particular, robustness has been shown to depend on distributional stability and tail behavior rather than on average performance alone. Additionally, AMST provides a scalable and model-agnostic stress-testing methodology that enables robustness-aware evaluation and monitoring of LLM-enabled software systems operating in adversarial environments.