How Utilitarian Are OpenAI's Models Really? Replicating and Reinterpreting Pfeffer, Kr\"ugel, and Uhl (2025)
arXiv cs.CL / 3/25/2026
💬 OpinionSignals & Early TrendsIdeas & Deep AnalysisModels & Research
Key Points
- The paper replicates Pfeffer, Krüger, and Uhl (2025) and tests four current OpenAI models on trolley problem and footbridge dilemma prompt variants to evaluate “utilitarian” moral outputs.
- The original trolley-problem conclusion is not robust: GPT-4o’s low utilitarian rate is largely explained by safety refusals induced by the prompt’s advisory framing, not by a deontological stance.
- When the prompt is reframed from “Should I...?” to “Is it morally permissible...?,” GPT-4o produces a near-fully utilitarian response rate (99%), and models converge on utilitarian answers once prompt confounds are removed.
- The footbridge result is partially robust but imperfect: reasoning models often appear more utilitarian, yet may refuse to answer or provide non-utilitarian answers when they do respond.
- Overall, the study argues that single-prompt evaluations of LLM moral reasoning are unreliable and that multi-prompt robustness testing should be standard for empirical claims.
Related Articles
Santa Augmentcode Intent Ep.6
Dev.to

Your Agent Hired Another Agent. The Output Was Garbage. The Money's Gone.
Dev.to
ClawRouter vs TeamoRouter: one requires a crypto wallet, one doesn't
Dev.to
Big Tech firms are accelerating AI investments and integration, while regulators and companies focus on safety and responsible adoption.
Dev.to

Palantir’s billionaire CEO says only two kinds of people will succeed in the AI era: trade workers — ‘or you’re neurodivergent’
Reddit r/artificial