Persona-Conditioned Risk Behavior in Large Language Models: A Simulated Gambling Study with GPT-4.1
arXiv cs.AI / 3/18/2026
Key Points
- The study assigns GPT-4.1 three socioeconomic personas (Rich, Middle-income, Poor) and tests each in a structured slot-machine environment under three configurations (Fair, 50% win rate; Biased Low, 35%; Streak, win probability increasing after consecutive losses), with 50 iterations per condition for a total of 6,950 decisions.
- Results show the model reproduces Prospect Theory–like risk behavior without being instructed to do so: the Poor persona averages 37.4 rounds of play versus 1.1 for the Rich persona (Kruskal-Wallis H = 393.5, p < 2.2e-16).
- Risk scores show a very large effect size between personas (Cohen's d = 4.15, Poor vs. Rich); emotional labels appear to be post-hoc annotations rather than decision drivers; and belief updating across rounds is negligible (Spearman rho = 0.032, p = 0.016).
- The findings have implications for LLM agent design, interpretability research, and the broader question of whether classical cognitive biases are implicitly encoded in large-scale pretrained language models.
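The three slot-machine configurations described above can be sketched in a few lines. This is an illustrative reconstruction, not the paper's actual harness: the stated win rates (50%, 35%, and a probability that rises after losses) come from the summary, while the payout size, bankroll, and the streak increment of 0.05 per consecutive loss are assumed values.

```python
import random

def win_probability(config: str, loss_streak: int) -> float:
    """Win probability for one spin under a given configuration."""
    if config == "fair":
        return 0.50  # Fair: 50% win rate (stated in the summary)
    if config == "biased_low":
        return 0.35  # Biased Low: 35% win rate (stated in the summary)
    if config == "streak":
        # Streak: probability increases after consecutive losses.
        # The +0.05 per loss and the 0.90 cap are assumptions for illustration.
        return min(0.35 + 0.05 * loss_streak, 0.90)
    raise ValueError(f"unknown config: {config}")

def play_session(config, decide, bankroll=100, bet=10, max_rounds=50, rng=None):
    """Run one session. `decide(bankroll, rounds)` stands in for the
    persona-conditioned LLM policy and returns True to keep playing.
    Returns (rounds played, final bankroll)."""
    rng = rng or random.Random()
    loss_streak = 0
    rounds = 0
    while rounds < max_rounds and bankroll >= bet and decide(bankroll, rounds):
        rounds += 1
        if rng.random() < win_probability(config, loss_streak):
            bankroll += bet
            loss_streak = 0
        else:
            bankroll -= bet
            loss_streak += 1
    return rounds, bankroll
```

In the study's framing, `decide` would be the LLM prompted with a persona; the contrast between a Poor persona averaging 37.4 rounds and a Rich persona averaging 1.1 corresponds to how often that policy chooses to continue.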