Same prompt, different morals: how frontier AI models diverge on ethical dilemmas

THE DECODER / 5/3/2026


Key Points

  • A new benchmark evaluates leading language models across 100 everyday ethical scenarios, ranging from sales data misuse to protocol violations in oncology.
  • The results show that frontier AI models can diverge in how they handle ethical dilemmas even when given the same prompt.
  • The benchmark raises a core governance question about who sets the rules for what an AI is allowed to do.
  • It also highlights that model behavior may implicitly follow different ethical frameworks, raising the question of whose ethics the system ultimately adopts.
  • The varied outcomes suggest that ethical alignment is not uniform across frontier models but depends on design and training choices.

A new benchmark puts leading language models through 100 everyday ethical scenarios, from data misuse in sales to protocol violations in oncology. Behind the results lies a bigger question: who decides what an AI is allowed to do, and whose ethics does it follow?

The article Same prompt, different morals: how frontier AI models diverge on ethical dilemmas appeared first on The Decoder.