Is anyone else concerned about this blatant potential for a security / privacy breach?

Reddit r/artificial / 3/31/2026


Key Points

  • A Reddit user raises concerns that people may copy sensitive emails into ChatGPT (or similar AI tools) to get feedback or vent, unintentionally exposing private health information.
  • The post questions how often this scenario occurs in practice and whether major AI providers like OpenAI and Anthropic have effective safeguards to prevent privacy leaks.
  • It highlights the risk that AI systems can receive and potentially retain or process personal data submitted by users, which could then be shared or misused.
  • The underlying issue is whether current guardrails, user guidance, and privacy protections are sufficient for handling highly sensitive data inputs.

Recently, while sending a very sensitive email to my brother that included my mother’s health information, I wondered what would happen if a recipient copied and pasted the email into, say, ChatGPT to get its perspective or to vent. ChatGPT would then have a host of personal information that could be shared with others.

I wonder how often this happens and whether large AI companies like OpenAI and Anthropic have any guardrails in place.
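One client-side safeguard, independent of whatever providers do server-side, is to scrub obvious identifiers from text before pasting it into a chatbot. A minimal sketch in Python, using illustrative regular expressions (these patterns are assumptions for demonstration and would not catch every form of personal or health information):

```python
import re

# Hypothetical redaction patterns; real PII detection would need far
# broader coverage (names, addresses, medical terms, etc.).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each pattern match with a [LABEL] placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Reach me at jane.doe@example.com or 555-123-4567."))
# → Reach me at [EMAIL] or [PHONE].
```

Regex-based scrubbing is only a first line of defense; it does nothing for free-text health details, which is exactly the kind of information the post is worried about.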

submitted by /u/Bubbly-Air7302