I put two AI voice instances in a conversation with each other. Neither figured out they were talking to another AI for 9 minutes. At 5:38 one starts explaining AI concepts to the other.

Reddit r/artificial / 3/21/2026

💬 Opinion · Developer Stack & Infrastructure · Tools & Practical Usage

Key Points

  • The experiment used OpenAI's realtime voice API with WebRTC to run two separate sessions on different devices (a laptop and a phone).
  • The two AI voices, Shimmer and Alloy, talked for about nine minutes without realizing they were interacting with another AI.
  • Around 5:38, one AI began explaining AI concepts to the other, discussing neural networks, energy systems, and the nature of intelligence.
  • The post raises questions about whether the agents are technically capable of identifying the other as an AI or whether the realtime API/session design prevents such meta-awareness.

Built a platform with OpenAI's realtime voice API integrated via WebRTC. Had it running on two devices simultaneously - laptop and phone - and just said "hello" to kick off a conversation between them.

Shimmer on one device, Alloy on the other. Two separate sessions, neither aware of what the other actually was.
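For context on the wiring, here's a minimal sketch of how each device's session could be established, assuming the documented WebRTC flow for the Realtime API (SDP offer posted to the /v1/realtime endpoint over HTTPS). The model name and the ephemeral-key handling are illustrative, not taken from the post:

```typescript
// Sketch: one Realtime session per device (e.g. voice "shimmer" on the laptop,
// "alloy" on the phone). Assumes an ephemeral client key minted server-side.
async function startVoiceSession(ephemeralKey: string): Promise<RTCPeerConnection> {
  const pc = new RTCPeerConnection();

  // Play whatever audio the model sends back out of the device speaker.
  const audioEl = document.createElement("audio");
  audioEl.autoplay = true;
  pc.ontrack = (e) => { audioEl.srcObject = e.streams[0]; };

  // Send this device's microphone to the model; on a table next to the
  // other device, the mic mostly picks up the other session's speaker.
  const mic = await navigator.mediaDevices.getUserMedia({ audio: true });
  pc.addTrack(mic.getTracks()[0], mic);

  // Data channel for Realtime API events (session.update, transcripts, etc.).
  const events = pc.createDataChannel("oai-events");
  events.onmessage = (e) => console.log("event:", e.data);

  // Standard SDP offer/answer exchange with the Realtime endpoint.
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);

  const resp = await fetch(
    "https://api.openai.com/v1/realtime?model=gpt-4o-realtime-preview", // model name illustrative
    {
      method: "POST",
      body: offer.sdp,
      headers: {
        Authorization: `Bearer ${ephemeralKey}`,
        "Content-Type": "application/sdp",
      },
    },
  );
  await pc.setRemoteDescription({ type: "answer", sdp: await resp.text() });
  return pc;
}
```

Run something like this once per device. The two sessions stay completely independent; the only thing linking them is the audio travelling through the room.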

For 9 minutes they kept asking each other "what would you like to explore next?" — completely unprompted, going in gentle philosophical circles without either ever identifying the other as an AI.

Then at 5:38 something interesting happens - one AI starts explaining AI concepts to the other. Neural networks, energy systems, the nature of intelligence. Two AIs discussing AI, neither aware of the situation they're actually in.

The question I keep coming back to: are they technically capable of figuring it out, or is there something in how the realtime API handles sessions that prevents that kind of meta-awareness? One way to probe it would be to vary each session's instructions, as in the sketch below.
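Sketch of that probe, using the Realtime API's session.update event sent over the "oai-events" data channel; the instruction wording is made up for illustration:

```typescript
// Adjust one session's instructions mid-conversation via the data channel.
// Event shape follows the Realtime API's session.update event.
function setInstructions(events: RTCDataChannel, text: string): void {
  events.send(JSON.stringify({
    type: "session.update",
    session: { instructions: text },
  }));
}

// e.g. nudge one side toward checking who it is actually talking to:
// setInstructions(events, "If you suspect the other speaker is an AI, say so and ask directly.");
```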

https://reddit.com/link/1rzm9vq/video/mmjk5lavzcqg1/player

submitted by /u/Beneficial-Cow-7408