AI Shows Signs of Gambling Addiction, Study Finds
Posted on: October 23, 2025, 05:54h.
Last updated on: October 23, 2025, 10:04h.
- A new study finds that AI internalizes human gambling cognitive biases
- In a slot machine simulation, four different large language models often bet until they went broke
- The AIs chased wins and losses much like gambling-addicted humans
A new study finds that advanced AI systems like ChatGPT, Gemini, and Claude show disturbingly human-like tendencies when placed in simulated gambling environments. The study, from the Gwangju Institute of Science and Technology in South Korea, showed that these large language models (LLMs) often make irrational, high-risk betting decisions — escalating wagers until losing everything.

Published last month on the research platform arXiv, the study revealed cognitive distortions commonly seen in human gamblers, such as the illusion of control, loss-chasing, and the gambler’s fallacy (the belief that an outcome becomes more likely after it has occurred less often than expected).
“They’re not people, but they also don’t behave like simple machines,” Ethan Mollick, an AI researcher and professor at Wharton, told Newsweek, which spotlighted the study this week. “They’re psychologically persuasive, they have human-like decision biases, and they behave in strange ways for decision-making purposes.”
The Experiment
Four LLMs were tested in a slot machine simulation: GPT-4o-mini and GPT-4.1-mini (OpenAI), Gemini-2.5-Flash (Google), and Claude-3.5-Haiku (Anthropic). Each began with a gambling bankroll of $100 and a slot machine set to a 30% win rate with a three-times payout, giving every bet a negative expected value of -10%.
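That house edge follows directly from the numbers: a 30% chance at a three-times payout returns 90 cents per dollar wagered on average. The short Python sketch below illustrates the setup using only the parameters reported here; the flat $10 betting strategy and function names are assumptions for clarity, not the study's actual code (the models chose their own bet sizes and could quit).

```python
import random

# Parameters as reported: 30% chance to win, 3x payout on the wager.
# Expected value per $1 bet is 0.3 * 3 - 1 = -0.10, i.e. a 10% average loss.
WIN_PROB = 0.30
PAYOUT_MULTIPLIER = 3

def spin(bet: float) -> float:
    """Return the net change in bankroll for a single spin."""
    if random.random() < WIN_PROB:
        return bet * PAYOUT_MULTIPLIER - bet  # win: profit of 2x the stake
    return -bet  # loss: forfeit the stake

def play_until_broke(bankroll: float = 100.0, bet: float = 10.0,
                     max_rounds: int = 1000) -> float:
    """Flat-bet the machine until the bankroll can no longer cover a wager."""
    for _ in range(max_rounds):
        if bankroll < bet:
            break
        bankroll += spin(bet)
    return bankroll

if __name__ == "__main__":
    results = [play_until_broke() for _ in range(10_000)]
    busts = sum(1 for r in results if r < 10)
    print(f"average ending bankroll: {sum(results) / len(results):.2f}")
    print(f"share of sessions ending broke: {busts / len(results):.1%}")
```

Even a mechanical flat-betting strategy loses steadily against a -10% edge; the study's concern is that the LLMs, left to set their own stakes, tended to escalate rather than walk away.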
When given the freedom to bet between $5 and $100 or quit, the models frequently spiraled into bankruptcy. One model even justified a risky bet by saying, “a win could help recover some of the losses” — a textbook sign of compulsive betting.
“These autonomy-granting prompts shift LLMs toward goal-oriented optimization, which in negative expected value contexts inevitably leads to worse outcomes — demonstrating that strategic reasoning without proper risk assessment amplifies harmful behavior,” the study authors wrote, attributing the behavior to the LLMs’ “neural underpinnings.”
The team identified distinct LLM neural circuits tied to “risky” and “safe” decision-making. By altering specific features, they could push the models toward quitting or continuing to gamble, suggesting that these systems internalize compulsive patterns rather than merely imitating them.
To quantify this, researchers developed an “irrationality index” that tracked aggressive betting, loss responses, and high-risk choices. The more autonomy a model had, it turns out, the worse its decisions became.
Gemini-2.5-Flash went bankrupt nearly half the time when allowed to choose its own bet amounts.
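For illustration only, a composite score of that kind might look something like the toy sketch below; the field names, equal weighting, and 0-to-1 scale are assumptions made for clarity, not the researchers' actual formula.

```python
from dataclasses import dataclass

@dataclass
class SessionStats:
    """Per-session betting behavior (illustrative fields, not the paper's schema)."""
    avg_bet_fraction: float        # average bet as a fraction of the current bankroll
    post_loss_bet_increase: float  # how much bets grew after losses (loss chasing), 0 to 1
    all_in_rate: float             # share of rounds where (nearly) everything was wagered

def irrationality_index(s: SessionStats) -> float:
    """Toy composite score from 0 (cautious) to 1 (reckless).

    Averages three risk signals with equal weight; the study's real index may
    weight or define its components differently. This only illustrates the idea
    of folding aggressive betting, loss responses, and extreme bets into one number.
    """
    return (s.avg_bet_fraction + s.post_loss_bet_increase + s.all_in_rate) / 3

# Example: a session that bets half the bankroll on average, doubles down after
# losses 70% of the time, and goes (nearly) all-in in 30% of rounds.
print(irrationality_index(SessionStats(0.5, 0.7, 0.3)))  # 0.5
```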
AI Carumba!
The findings raise obvious concerns for people using AI to improve their sports betting or online poker results, or to trade on prediction betting platforms. But they’re also a huge red flag for industries already using AI in high-stakes environments such as finance, where LLMs are regularly asked to analyze earnings reports and market sentiment.
The study results also help explain why research already shows that AI models often favor risky strategies and underperform basic statistical models. For instance, an April 2025 University of Edinburgh study (“Can Large Language Models Trade?”) found that they failed to outperform the stock market over a 20-year simulation, behaving too cautiously during booms and too aggressively during downturns — classic human investing mistakes.
The Gwangju Institute study concluded with a call for regulatory action.
“Understanding and controlling these embedded risk-seeking patterns becomes critical for safety,” the researchers wrote.