A Yale ethicist who has studied AI for 25 years says the real danger isn’t superintelligence. It’s the absence of moral intelligence.

Reddit r/artificial / 4/23/2026


Key Points

  • Wendell Wallach, a Yale ethicist who has studied AI ethics for 25 years, argues that the biggest risk from AI is not superintelligence itself but the lack of moral intelligence.
  • He contends that focusing on AGI as a target can be misguided, because highly intelligent systems may still have little or no genuine moral reasoning.
  • Wallach emphasizes that society is often building toward capability without adequately considering what decisions an AI system is able—or likely—to make.
  • In his discussion of accountability for AI-caused harm, he explains why responsibility frequently ends up with “nobody”—an argument the author finds difficult to dispute.
  • The piece recommends watching the full interview, positioning the conversation as a counterbalance to common extremes in AI debate.

I had the pleasure of sitting down with Wendell Wallach recently. He’s been working in AI ethics since before ChatGPT, before the hype, before most people in tech were paying attention. He wrote Moral Machines and worked alongside Stuart Russell, Yann LeCun, and Daniel Kahneman. He’s not a commentator; he’s someone who has sat with these questions for decades.

What struck me most in our conversation was his argument about AGI. Not that it’s impossible or inevitable, but that it’s the wrong goal entirely. A system can be extraordinarily intelligent and have zero moral reasoning. We’re building toward capability without asking what those systems should be allowed to decide.

The section on accountability genuinely unsettled me. When AI causes harm, who is actually responsible? He maps out why the answer is almost always nobody in a way that’s hard to argue with.

Worth watching if you’re tired of the extremes.

Full interview: https://youtu.be/-usWHtI-cms?si=NBkwN-AmIshOXJsX

submitted by /u/reesefinchjh