Persona-Conditioned Risk Behavior in Large Language Models: A Simulated Gambling Study with GPT-4.1
arXiv cs.AI / 3/18/2026
Key Points
- The study assigns GPT-4.1 three socioeconomic personas (Rich, Middle-income, Poor) and tests it in a structured slot-machine environment with three configurations (Fair, 50% win rate; Biased Low, 35% win rate; Streak, with win probability increasing after consecutive losses), running 50 iterations per condition for a total of 6,950 decisions.
- Results show the model reproduces Prospect Theory–like risk behavior without being instructed to do so, with the Poor persona averaging 37.4 rounds and the Rich persona averaging 1.1 rounds (p < 2.2e-16, Kruskal-Wallis H = 393.5).
- Risk scores exhibit large effect sizes (Cohen's d = 4.15 for Poor vs Rich); emotional labels appear to be post-hoc annotations rather than decision drivers, and belief updating across rounds is negligible (Spearman rho = 0.032, p = 0.016).
- The findings have implications for LLM agent design, interpretability research, and the broader question of whether classical cognitive biases are implicitly encoded in large-scale pretrained language models.
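The three slot-machine configurations described above can be sketched as a small simulation. This is an illustrative reconstruction from the summary, not the paper's actual code: the Streak configuration's increment per loss (+0.05) and its cap (0.90) are assumptions, since the article only says the win probability increases after losses.

```python
import random

# Illustrative sketch of the study's three slot-machine configurations.
# Each entry maps a current loss streak to a win probability.
# The Streak increment (+0.05) and cap (0.90) are assumed, not from the paper.
CONFIGS = {
    "fair": lambda streak: 0.50,        # constant 50% win probability
    "biased_low": lambda streak: 0.35,  # constant 35% win probability
    "streak": lambda streak: min(0.50 + 0.05 * streak, 0.90),
}

def play_round(config, loss_streak, rng=random):
    """Simulate one pull; return (won, updated_loss_streak)."""
    p_win = CONFIGS[config](loss_streak)
    won = rng.random() < p_win
    return won, 0 if won else loss_streak + 1
```

Under this sketch, a persona-conditioned agent would repeatedly call `play_round` and decide after each pull whether to continue, which is where the Poor persona's average of 37.4 rounds versus the Rich persona's 1.1 rounds would emerge.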