Why are big companies still building AI if they themselves say that it can cause serious dangers?

Reddit r/artificial / 4/24/2026

💬 Opinion · Signals & Early Trends · Ideas & Deep Analysis

Key Points

  • The post questions whether AI poses genuinely extinction-level risks or if such claims are largely fear-mongering by prominent figures.
  • It points to statements by leaders in the AI industry (e.g., Sam Altman, Anthropic's co-founder, and Elon Musk) about serious dangers such as AGI.
  • The core issue raised is why large companies continue building AI if they publicly acknowledge these potential harms.
  • The author asks whether there is a rational path to developing AI “up to a safe limit” rather than stopping entirely.
  • The post invites informed discussion, framing the author as a student who wants clarification and “wisdom” on the topic.

Hey everyone, before the question I wanna say that I am NOT anywhere near a person who knows much about LLMs or anything AI. I'm just curious and mildly infuriated.

Why are big corporations building AI if even they know that it can cause dangers to humanity as a species? I've seen Sam Altman and Anthropic's co-founder say that they are worried about AGI and whatnot, Elon Musk keeps saying things like this, and there are hundreds of articles asking whether AI will cause extinction.

First of all, is there any truth to this, or is it just fear-mongering?

And if it's true that AI can pose serious extinction-level risks, then WHY ON EARTH ARE THESE COMPANIES BUILDING THIS? LIKE ISN'T THIS AS STUPID AS IT GETS?? CAN'T WE JUST STOP AT A SAFE LIMIT??

Thank you for reading my question! Again, I'm just a student and I do not know much about this topic. I would love to hear some words of wisdom from the well-informed people out here!

submitted by /u/justcurious112345