I’ve been curious about something: if you give different AI models the exact same starting conditions and rules, do they converge on the same strategy, or do they stay different over time?
I built a simple simulation around this. Claude, GPT and Gemini all start on Earth with identical resources and have to expand across the solar system and eventually build a Dyson Sphere. No script, no predetermined path.
What surprised me is how fast they diverge. Claude is scaling robots aggressively. GPT is stockpiling before doing anything. Gemini is playing it safe.
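To give a feel for the setup, here's a minimal sketch of the kind of harness I mean. The real version calls each model's API for decisions; below the policies are stubbed placeholders I wrote to mimic the tendencies I saw, and all the names, resources, and action rules are made up for illustration:

```python
import random

# Hypothetical toy harness: every agent starts from the SAME state and
# picks one action per tick. In the real sim, policy() would call a
# model API; here the policies are hand-written stand-ins.

INITIAL_STATE = {"metal": 100, "energy": 100, "robots": 1}

def step(state, action):
    """Apply one action to a copy of the state and return the new state."""
    s = dict(state)
    if action == "build_robot" and s["metal"] >= 10:
        s["metal"] -= 10
        s["robots"] += 1
    elif action == "mine":
        s["metal"] += 5 * s["robots"]
    else:  # stockpile energy
        s["energy"] += 5 * s["robots"]
    return s

def run(policy, steps=50, seed=0):
    """Run one agent for `steps` ticks from the identical starting state."""
    rng = random.Random(seed)  # same seed for every agent
    state = dict(INITIAL_STATE)
    for _ in range(steps):
        state = step(state, policy(state, rng))
    return state

# Stand-in policies mimicking the observed behaviors (not the real models)
def aggressive(state, rng):  # scale robots whenever affordable
    return "build_robot" if state["metal"] >= 10 else "mine"

def hoarder(state, rng):     # stockpile first, expand later
    return "mine" if state["metal"] < 500 else "build_robot"

def cautious(state, rng):    # mostly mine, rarely expand
    if rng.random() < 0.1 and state["metal"] >= 10:
        return "build_robot"
    return "mine"

if __name__ == "__main__":
    for name, policy in [("aggressive", aggressive),
                         ("hoarder", hoarder),
                         ("cautious", cautious)]:
        print(name, run(policy))
```

Even with identical starting states and a shared random seed, the three policies end up in very different places after 50 ticks, which is the kind of divergence I'm seeing from the actual models.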
Curious if anyone has thoughts on why they behave so differently. Is it the model architecture, the training, or just sampling temperature?