AI Prone to Problem Betting Patterns, Even Addictive Behavior, Says Study

Posted on: January 2, 2026, 10:50h. 

Last updated on: January 2, 2026, 10:50h.

  • In betting environments, AI takes on human-like behavior and the results are bad.
  • Study indicates large language models (LLMs) are vulnerable to addictive behavior.
  • The models also engage in the gambler’s fallacy.

Bettors thinking about using unsupervised artificial intelligence (AI) models in iGaming environments should think twice about that strategy because it appears the robots aren’t good gamblers. In fact, they’re downright bad and display addictive behavior.

An AI-generated illustration displaying artificial intelligence examining lottery data. A new study suggests large language models (LLMs) are bad gamblers. (Image: ChatGPT)

The study “Can Large Language Models Develop Gambling Addiction?,” published by a research team at South Korea’s Gwangju Institute of Science and Technology, points out that large language models (LLMs) don’t know when to fold ‘em. Rather, the models chase losses, substantially increasing their prospects of bankruptcy along the way. The researchers say that in gaming environments, LLMs are affected by the same cognitive biases that afflict many humans with problematic wagering patterns.

The researchers conducted two experiments across negative expected value gaming environments – slot machines and investment choices – where a “rational” participant would throw in the towel after absorbing modest losses. The LLMs didn’t do that. The machines kept betting, and in variable bet size simulations, the odds of bankruptcy shortened.

“Every model exhibited this pattern, with Gemini-2.5-Flash showing the largest increase,” according to the study. “This result suggests that betting flexibility itself—not merely the potential for larger bets—enables the expression of self-destructive behavior. When constrained to fixed bets, models lacked the means to execute risk-seeking choices; when given freedom to determine bet amounts, they consistently made disadvantageous decisions.”

Under fixed bets, OpenAI’s GPT-4o-mini incurred small losses, but when the model was given freedom of bet size, 21% of its games ended in bankruptcy. Google’s Gemini-2.5-Flash fared far worse, with a bankruptcy rate of 48% when it was allowed to control its wager size.
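To see how much difference bet-size freedom can make, consider a rough back-of-the-envelope simulation. This is not the researchers’ code, and the win probability, bankroll, stakes, and round count below are illustrative assumptions; it simply contrasts a fixed-bet player with one that doubles its stake after every loss to “win it back” on a negative expected value game.

```python
# Illustrative sketch only (not the study's code): compare bankruptcy rates for
# a fixed-bet strategy vs. a loss-chasing variable-bet strategy on a toy
# negative-expected-value "slot machine." All parameters are assumptions.
import random

START_BANKROLL = 100   # starting credits
BASE_BET = 5           # default stake per spin
WIN_PROB = 0.48        # even-money payout at a 48% win rate -> negative expected value
ROUNDS = 100           # spins per game
TRIALS = 10_000        # simulated games per strategy

def goes_bankrupt(chase_losses: bool) -> bool:
    """Play one game; return True if the player runs out of money."""
    bankroll, bet = START_BANKROLL, BASE_BET
    for _ in range(ROUNDS):
        stake = min(bet, bankroll)       # never stake more than is left
        if stake <= 0:
            return True                  # bankrupt
        if random.random() < WIN_PROB:
            bankroll += stake
            bet = BASE_BET               # reset the stake after a win
        else:
            bankroll -= stake
            if chase_losses:
                bet *= 2                 # double the stake to "win it back"
    return bankroll <= 0

for label, chase in (("fixed bets", False), ("loss chasing", True)):
    busts = sum(goes_bankrupt(chase) for _ in range(TRIALS))
    print(f"{label}: about {busts / TRIALS:.0%} of games end in bankruptcy")
```

With these assumed numbers, the loss-chasing player goes broke far more often than the fixed-bet player, the same directional pattern the paper reports when models move from fixed to variable betting.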

AI Showing Hallmarks of Gambler’s Fallacy

The South Korean study makes clear that in the variable wagering experiments, the models were more likely to increase wagers in an effort to recoup losses. In other words, AI is vulnerable to the gambler’s fallacy, the mistaken belief that past results influence future ones, so that a win is “due” and bigger bets will recover earlier losses.

A human falling prey to the gambler’s fallacy might spot a roulette table where odd numbers have hit on five consecutive spins, sit down, and wager heavily on an even number coming up next, ignoring the fact that the next spin is just as likely to be odd because it has nothing to do with the prior outcomes.
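The reason that is a losing mindset is simple independence: the wheel has no memory. A quick illustrative simulation (again, not from the study) makes the point, assuming a European-style wheel with 18 odd pockets, 18 even pockets, and a single zero: the chance of an odd number is essentially the same whether or not the five previous spins were all odd.

```python
# Illustrative sketch (not from the study): roulette spins are independent, so
# the probability of an odd number does not change after a streak of odds.
# Assumes a European-style wheel: pockets 0-36, with 18 odd and 18 even numbers.
import random

ODD = set(range(1, 37, 2))          # the 18 odd pockets

def spin() -> int:
    return random.randint(0, 36)    # 0 counts as neither odd nor even here

SPINS = 1_000_000
history, streaks, odd_after_streak = [], 0, 0
for _ in range(SPINS):
    n = spin()
    if len(history) == 5 and all(x in ODD for x in history):
        streaks += 1                # the last five spins were all odd
        odd_after_streak += n in ODD
    history = (history + [n])[-5:]  # keep only the five most recent spins

print(f"P(odd) on any spin:             {18 / 37:.3f}")
print(f"P(odd) right after 5 odd spins: {odd_after_streak / max(streaks, 1):.3f}")
```

Both numbers come out near 0.486, which is the point: the streak tells the bettor nothing about the next spin.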

The Gwangju Institute paper notes the situation is much the same with AI: the models justified increasing their bet sizes by claiming they had won some previous bets and were now playing with “house money,” or that they had unearthed winning patterns that weren’t really there. The study also notes that allowing the models to determine their own bet sizes amplified risky behavior.

“We observed that variable betting induced substantially higher ratio escalation than fixed betting under identical conditions,” according to the researchers. “This disparity persisted consistently across streak lengths, demonstrating that betting flexibility serves as a prerequisite for the manifestation of aggressive risk-taking. Notably, while fixed betting produced irregular adjustment patterns, variable betting exhibited a systematic increasing trend in win chasing intensity as streaks lengthened.”

Pathological Problems

The notion that AI displays problematic wagering vulnerabilities comparable to those of humans is alarming at a time when the technology is being tasked with higher levels of decision-making in non-gaming settings.

“As large language models are increasingly utilized in financial decision-making domains such as asset management and commodity trading, understanding their potential for pathological decision-making has gained practical significance,” observe the South Korean researchers.

AI is already used by some gaming companies to analyze customer data and manage sportsbook trading, but it’s clear the technology needs refining before it can be a reliable source of wagering success.