If I work on something in codex, and future models are trained on my interactions, does that mean the next model release will be able to code my project for other users?

Reddit r/artificial / 4/26/2026

💬 Opinion · Signals & Early Trends · Ideas & Deep Analysis

Key Points

  • The post argues that if Codex interactions can be used to train future models, then users’ work could effectively become public, similar to posting code on GitHub.
  • It highlights a hypothetical scenario where a user’s proprietary project could be replicated by the next model when others prompt it with a description of that project.
  • The author questions the incentives to create or sell software privately if future releases might learn from users’ interactions and reproduce their work.
  • Overall, the piece frames the concern as a privacy and IP risk stemming from potential data usage in model training.
  • The discussion implies users may need to reconsider how they work in Codex depending on transparency and consent around training data.

If this is true, using Codex feels as good as posting to GitHub. Taken to an extreme: if you write a calendar app called MySecretSauceCalendar using Codex, then in the next point release everyone can prompt GPT with "write me a calendar app that does what MySecretSauceCalendar does"… you're basically publicizing your code.

Why write anything you’d otherwise sell?

submitted by /u/SoaokingGross