OpenAI's safety brain drain finally gets an explanation and it's just Sam Altman's vibes

THE DECODER / 4/7/2026


Key Points

  • A new New Yorker profile, based on more than 100 interviews, quotes Sam Altman explaining why OpenAI’s safety researchers repeatedly leave.
  • Altman frames the “brain drain” as partly driven by role fit and differing expectations about safety work rather than solely organizational failure.
  • The story suggests that disagreements over shifting commitments, which critics characterize as potential "deception," are presented by insiders as an accepted part of the safety job.
  • Overall, it portrays OpenAI’s safety staffing challenges as an internal culture/communication and incentives problem reflected in leadership “vibes.”

"My vibes don't really fit." In a new New Yorker profile based on over 100 interviews, Sam Altman explains why safety researchers keep leaving OpenAI and why shifting commitments others might call deception are just part of the job.
