Why OpenAI really shut down Sora
TechCrunch / 3/30/2026
Key Points
- OpenAI’s shutdown of Sora just six months after its public release quickly prompted questions about whether the tool’s face-upload feature had been a way to collect user data.
- The article frames the abrupt removal as a response to underlying issues that became apparent after launch, rather than a simple “data grab” narrative.
- It highlights how privacy and consent concerns can rapidly escalate when AI tools allow user-provided biometric-like inputs such as faces.
- The piece suggests the decision reflects broader risks and operational constraints involved in deploying generative video systems at scale.
- Overall, the incident is positioned as an early signal of the market and governance pressures shaping how generative AI products are maintained or withdrawn.
OpenAI's decision last week to shut down Sora, its AI video-generation tool, just six months after releasing it to the public raised immediate suspicions. The app had invited users to upload their own faces — so was this some kind of elaborate data grab?