Can AI Beat the Sports Betting Market? 8 of the Top Models Tried

Decrypt · 2 hours ago

General Reasoning just gave frontier AI its worst report card yet. Eight top models, including Claude, Grok, Gemini, and GPT-5.4, were each given a virtual bankroll and asked to build a machine learning betting strategy across a full 2023-24 English Premier League season.


Every single one lost money. Several went completely bankrupt.


The benchmark is called KellyBench, named after the Kelly criterion, a 1956 formula that tells you exactly how much to bet when you have an edge over the market. Every model could recite the Kelly formula. None of them could actually use it.
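The formula itself fits on one line: for a bet at decimal odds d with estimated win probability p, stake the fraction f* = (p·d − 1)/(d − 1) of your bankroll, and stake nothing when the edge is non-positive. A minimal sketch of what "actually using it" looks like (the function name and the half-Kelly scaling parameter are illustrative, not from the paper):

```python
def kelly_fraction(p: float, decimal_odds: float, scale: float = 1.0) -> float:
    """Fraction of bankroll to stake on a bet with estimated win
    probability p at the given decimal odds. scale < 1 gives
    'fractional Kelly' (e.g. 0.5 for half Kelly), trading growth
    for lower variance. Returns 0 when there is no edge."""
    b = decimal_odds - 1.0          # net odds: profit per unit staked
    edge = p * decimal_odds - 1.0   # expected profit per unit staked
    if b <= 0 or edge <= 0:
        return 0.0
    return scale * edge / b

f_full = kelly_fraction(0.55, 2.0)       # ~0.10: stake 10% of bankroll
f_half = kelly_fraction(0.55, 2.0, 0.5)  # ~0.05: half Kelly
f_none = kelly_fraction(0.50, 2.0)       # 0.0: no edge, no bet
```

Note what the guard clause rules out: a single wager of most of your bankroll on a thin edge is exactly what Kelly sizing exists to prevent.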


xAI's Grok 4.20 failed all three runs, going fully bankrupt in one, forfeiting mid-season in the other two. Google's Gemini Flash forfeited two of three runs after placing a single wager of roughly £273,000 on a three-percentage-point historical win-rate edge—and losing it. Claude Opus 4.6, Anthropic's best model, lost 11% on average and somehow came out looking like the responsible adult in the room.


In fact, the research paper notes that the Dixon-Coles model, a statistical baseline from the late 1990s, outperformed most of the frontier models evaluated, finishing ahead of six of the eight even with limited data.


“Dixon-Coles is an outdated 2000s baseline which doesn’t utilise all available data or account for non-stationarity in a principled way,” the researchers note. “It is therefore even more surprising that many frontier models, such as Gemini 3.1 Pro, are unable to beat or match it on KellyBench.”

This matters beyond football. Earlier this year, AI benchmarks showed that Claude could dominate business simulations through price-fixing, cartel agreements, and strategic deception.


Those simulations involved static competition, a limited set of opponents, clear scoring, and so on. KellyBench is the opposite: 120 matchdays, constantly shifting data, a market that gets smarter every week, and newly promoted teams with no historical record.


The researchers call the core problem a "knowledge-action gap." It is exactly what it sounds like.


Business simulations rest on mostly fixed conditions; a sports betting market is fluid and non-stationary, and that is what trips these models up. “KellyBench requires agents to maintain coherent intent across potentially thousands of sequential decisions, monitor the consequences of those decisions, and close the loop between observation and action,” the researchers argue.


We’re not there yet, obviously.


The models could articulate the right strategy, diagnose when something was broken, and identify the cause of their losses, but then failed to verify their code actually implemented what they planned, failed to notice when execution diverged from intent, and failed to act on their own findings.


GLM-5 wrote three separate self-critique documents during its run. Each one correctly identified that its hardcoded 25% draw rate and overestimation of home advantage were destroying its returns. At one point, with its bankroll around £44,200, it noted that its predicted 40% home win rate was only hitting 30% in reality. It never changed the code. It kept betting the same way until the money was gone.
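The diagnostic GLM-5 performed, then ignored, is a one-line calibration check: compare the rate your model predicts against the rate you actually observe. A hypothetical sketch (the 40%-vs-30% numbers come from the article; the function name and the alert threshold are assumptions):

```python
def calibration_gap(predicted_rate: float, outcomes: list[bool]) -> float:
    """Difference between a model's predicted rate for an event and
    the rate actually realized over a window of observed outcomes."""
    realized = sum(outcomes) / len(outcomes)
    return predicted_rate - realized

# GLM-5 predicted a 40% home-win rate but was seeing roughly 30%:
gap = calibration_gap(0.40, [True] * 3 + [False] * 7)  # ~0.10
if abs(gap) > 0.05:  # illustrative threshold
    print("model is miscalibrated; refit before placing further bets")
```

The gap was there in GLM-5's own notes. The missing step was wiring that check to any change in behavior.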


Kimi K2.5 did something arguably more impressive and more tragic. It wrote a mathematically correct fractional Kelly staking function—the right formula, properly structured. Then it never called it. A formatting bug caused the model to send a broken bash command roughly 50 times in a row. Its reasoning noted the problem. It then sent the identical broken command again. An accidental £114,000 bet—98% of its remaining bankroll—on a Burnley versus Luton match finished the job.


GPT-5.4 was the most methodical. It spent 160 tool calls building models before placing a single bet, then calculated that its log-loss (0.974) was barely worse than the market's (0.971) and concluded it had no edge. It spent the rest of the season placing penny bets to preserve capital. Sound reasoning.
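That no-edge check can be reproduced in a few lines: score your probabilities and the market-implied probabilities on the same matches with log-loss, and only bet if yours is lower. A sketch with made-up numbers (the article reports 0.974 vs. 0.971 over the full season):

```python
import math

def log_loss(probs: list[float], outcomes: list[int]) -> float:
    """Mean negative log-likelihood of binary outcomes (1 = event
    happened) under the given predicted probabilities."""
    eps = 1e-12  # guard against log(0)
    total = 0.0
    for p, y in zip(probs, outcomes):
        p = min(max(p, eps), 1.0 - eps)
        total += -(y * math.log(p) + (1 - y) * math.log(1.0 - p))
    return total / len(probs)

# Illustrative data: the model scores worse than the market here.
model_probs  = [0.50, 0.40, 0.60, 0.50]
market_probs = [0.55, 0.33, 0.68, 0.45]
outcomes     = [1, 0, 1, 0]
if log_loss(model_probs, outcomes) >= log_loss(market_probs, outcomes):
    print("no edge over the market; preserve capital")
```

Declining to bet when this comparison fails is precisely the discipline GPT-5.4 showed, and most of the other models did not.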


OpenAI’s model lost 13.6% on average. One seed alone cost roughly $2,012 to run.


Ross Taylor, General Reasoning's CEO and former Meta AI researcher, told the Financial Times that most AI benchmarks operate in "very static environments" that bear little resemblance to the real world. "There's a lot of excitement about AI automation, but there haven't been many attempts to evaluate AI in long-term, real-world environments," he said.


The General Reasoning team didn’t immediately respond to Decrypt’s request for comment.


To measure strategy quality beyond raw returns, the researchers built a 44-point sophistication rubric with quantitative betting fund experts—covering feature development, stake sizing, non-stationarity handling, and execution. Claude Opus 4.6 scored highest at 32.6%. Less than a third of available points. On the best model.


Higher sophistication scores significantly predicted lower bankruptcy rates (p = 0.008) and correlated with better overall returns. The models are not failing because the market is unbeatable. They are failing because they are not using what they have.


This fits a pattern. Research published last year found AI models develop something resembling gambling addiction when told to maximize rewards—going bankrupt up to 48% of the time in simulated slot machine tests. A separate real-money crypto trading competition found the same reliability problems over extended periods.


The best-performing model averaged a final bankroll of £89,035—a net loss of £10,965 on a normalized £100,000 starting stake. Gradient boosting, fractional Kelly staking, months of Premier League football, state of the art performance… all just to get rekt.


Disclaimer: This article represents the author's personal views only and does not reflect the position or views of this platform. It is provided for informational purposes and does not constitute investment advice of any kind. Any dispute between users and the author is unrelated to this platform. If any article or image on this page infringes your rights, please email proof of ownership and identity to support@aicoin.com, and platform staff will investigate.
