Depictions of Depression in Generative AI Video Models: A Preliminary Study of OpenAI's Sora 2

arXiv cs.AI / 3/23/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper characterizes how OpenAI's Sora 2 generative video model depicts depression and compares consumer App outputs with developer API outputs, using 100 videos generated from the single-word prompt "Depression."
  • App outputs show a recovery bias (78% of videos progress toward resolution) and greater motion and increasing brightness over time compared with API outputs, indicating that platform constraints influence narrative style (see the sketch after this list).
  • Across both modalities, videos converge on a narrow visual vocabulary with recurring objects (hoodies, windows, rain) and feature predominantly young adults, largely solitary figures, with gender skew varying by access point (App male 68%, API female 59%).
  • The authors conclude that Sora 2 does not invent new visual grammars but blends existing iconography, with platform constraints shaping what content reaches users, and caution that clinicians and patients should interpret AI-generated mental-health content in light of training data and design choices.
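The per-video brightness and motion measures summarized above can be approximated with a simple per-frame analysis. A minimal sketch follows, assuming the generated videos are local MP4 files and using OpenCV and NumPy; the paper does not specify its actual feature-extraction pipeline, so the proxies below (mean grayscale intensity for brightness, mean absolute frame difference for motion) are illustrative assumptions, not the authors' method.

```python
import cv2
import numpy as np

def brightness_and_motion(path: str) -> tuple[float, float]:
    """Return (brightness slope in units/second, mean frame-difference motion).

    Brightness per frame = mean grayscale intensity; the slope is a
    least-squares fit of brightness against time. Motion = mean absolute
    difference between consecutive grayscale frames. Illustrative proxies only.
    """
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if FPS metadata is missing
    brightness, motion, prev = [], [], None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(float)
        brightness.append(gray.mean())
        if prev is not None:
            motion.append(np.abs(gray - prev).mean())
        prev = gray
    cap.release()
    t = np.arange(len(brightness)) / fps              # frame timestamps in seconds
    slope = float(np.polyfit(t, brightness, 1)[0])    # brightness units per second
    return slope, float(np.mean(motion)) if motion else 0.0
```

Applied to each of the 100 videos, such per-video features could then be compared between the App and API groups, as in the next sketch.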

Abstract

Generative video models are increasingly capable of producing complex depictions of mental health experiences, yet little is known about how these systems represent conditions like depression. This study characterizes how OpenAI's Sora 2 generative video model depicts depression and examines whether depictions differ between the consumer App and developer API access points. We generated 100 videos using the single-word prompt "Depression" across two access points: the consumer App (n=50) and developer API (n=50). Two trained coders independently coded narrative structure, visual environments, objects, figure demographics, and figure states. Computational features across visual aesthetics, audio, semantic content, and temporal dynamics were extracted and compared between modalities. App-generated videos exhibited a pronounced recovery bias: 78% (39/50) featured narrative arcs progressing from depressive states toward resolution, compared with 14% (7/50) of API outputs. App videos brightened over time (slope = 2.90 brightness units/second vs. -0.18 for API; d = 1.59, q < .001) and contained three times more motion (d = 2.07, q < .001). Across both modalities, videos converged on a narrow visual vocabulary and featured recurring objects including hoodies (n=194), windows (n=148), and rain (n=83). Figures were predominantly young adults (88% aged 20-30) and nearly always alone (98%). Gender varied by access point: App outputs skewed male (68%), API outputs skewed female (59%). Sora 2 does not invent new visual grammars for depression but compresses and recombines cultural iconographies, while platform-level constraints substantially shape which narratives reach users. Clinicians should be aware that AI-generated mental health video content reflects training data and platform design rather than clinical knowledge, and that patients may encounter such content during vulnerable periods.
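The abstract reports group differences as Cohen's d with FDR-adjusted q-values, but does not spell out the test used. A minimal sketch of one plausible analysis, assuming Welch's t-tests with Benjamini-Hochberg correction and using synthetic placeholder data (the feature names and values below are hypothetical, not the paper's):

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

def cohens_d(a: np.ndarray, b: np.ndarray) -> float:
    """Cohen's d with a pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2))
    return float((a.mean() - b.mean()) / pooled)

# Hypothetical per-video features for the two access points (n=50 each).
rng = np.random.default_rng(0)
app_features = {"brightness_slope": rng.normal(2.9, 1.5, 50), "motion": rng.normal(6.0, 2.0, 50)}
api_features = {"brightness_slope": rng.normal(-0.2, 1.5, 50), "motion": rng.normal(2.0, 2.0, 50)}

names, d_values, p_values = [], [], []
for name in app_features:
    a, b = app_features[name], api_features[name]
    names.append(name)
    d_values.append(cohens_d(a, b))
    p_values.append(stats.ttest_ind(a, b, equal_var=False).pvalue)  # Welch's t-test (assumed)

# Benjamini-Hochberg correction produces q-values analogous to those reported with each effect size.
_, q_values, _, _ = multipletests(p_values, method="fdr_bh")
for name, d, q in zip(names, d_values, q_values):
    print(f"{name}: d = {d:.2f}, q = {q:.4f}")
```

This is a sketch under stated assumptions; the authors' statistical procedure may differ in test choice, correction method, or feature definitions.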