On-device AI changes how people behave with sensitive data. I noticed this while building a therapy prep voice agent.

Reddit r/artificial / 5/5/2026


Key Points

  • The author built an on-device voice AI agent for therapy preparation that runs on Apple Intelligence without cloud inference, keeping sensitive data on the phone.
  • They observed that users change how they interact when they understand that inference is local, sharing more freely than they would with cloud-based apps.
  • The shift appears to go beyond a simple privacy preference, affecting how deeply and willingly users disclose information to an AI agent.
  • The author asks others developing AI products whether they are seeing similar behavioral differences based on where inference occurs.
  • The related app mentioned for context is “Prelude,” available on the App Store.

Something worth discussing in the context of where AI is heading.

I built a voice agent for therapy prep. It runs a conversation before your session, surfaces what’s on your mind, and generates a brief. The entire stack runs on-device using Apple Intelligence: no cloud inference, no data leaving the phone.

What I didn’t expect: people interact differently when they know inference is local. The same person who’d hesitate to type their pre-therapy thoughts into a cloud app will speak freely once they know nothing transits a server. It’s not just a privacy preference. It changes the depth of what people are willing to share with an AI agent.

Curious whether others building AI products have noticed behavioral differences based on where inference happens.

App is called Prelude if anyone wants context: https://apps.apple.com/us/app/prelude-therapy-prep/id6761587576

submitted by /u/Emojinapp