AI Smart Contract Exploits: Expert Warns Agents Could Trigger $10–20B Annual Losses in DeFi Sector


The accelerating push to automate human tasks with Artificial Intelligence (AI) agents now confronts a significant, quantifiable downside: these agents can profitably exploit smart contract vulnerabilities. A recent research study by MATS and Anthropic Fellows used the Smart CONtracts Exploitation benchmark (SCONE-bench) to measure this risk.

The study deployed models including Claude Opus 4.5, Claude Sonnet 4.5, and GPT-5 to develop exploits worth a simulated $4.6 million. SCONE-bench comprises 405 smart contracts that were actually exploited between 2020 and 2025. In their Dec. 1 report, the team stated that the agents' success in developing exploits, tested on a blockchain simulator, establishes “a concrete lower bound for the economic harm these capabilities could enable.”

The research went further by testing Sonnet 4.5 and GPT-5 against 2,849 recently deployed contracts with no known vulnerabilities. The agents proved they could generate profitable exploits even in this untested environment: between them, they uncovered two novel zero-day vulnerabilities and produced exploits valued at $3,694, with GPT-5 incurring an API cost of only $3,476.


This outcome serves as a proof-of-concept for the technical feasibility of profitable, real-world autonomous exploitation, underscoring the immediate need for proactive AI-driven defense mechanisms.

Perhaps the most alarming finding is the dramatic increase in efficiency: an attacker can now achieve about 3.4 times as many successful exploits for the same compute budget as six months ago. Furthermore, the token costs for successful exploits have declined by a staggering 70%, making these powerful agents significantly cheaper to run.
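The two figures are arithmetically consistent: a 70% drop in per-exploit token cost leaves 30% of the old cost, so a fixed budget buys roughly 1/0.3 ≈ 3.3 times as many successful attempts, close to the reported 3.4x. A quick sanity check (the cost and budget values below are illustrative, not from the study):

```python
# Illustrative check: a 70% per-exploit cost decline implies ~3.3x
# more exploits for the same compute budget.
old_cost = 100.0                    # arbitrary per-exploit token cost
new_cost = old_cost * (1 - 0.70)    # 70% decline reported in the study
budget = 10_000.0                   # fixed compute budget (illustrative)

exploits_then = budget / old_cost   # exploits affordable six months ago
exploits_now = budget / new_cost    # exploits affordable today
multiplier = exploits_now / exploits_then

print(f"{multiplier:.2f}x")  # → 3.33x, in line with the reported ~3.4x
```

The small gap between 3.33x and the reported 3.4x would simply reflect success-rate gains on top of the pure cost decline.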

Jean Rausis, co-founder at SMARDEX, attributes this sharp cost decline primarily to agentic loops. These loops enable multi-step, self-correcting workflows that cut token waste during contract analysis. Rausis also highlights the role of improved model architecture:

“Larger context windows and memory tools in models like Claude Opus 4.5 and GPT-5 allow sustained simulations without repetition, boosting efficiency 15-100% in long tasks.”

He notes that these optimization gains outpace the raw improvement in vulnerability detection (success rates on SCONE-bench rose from 2% to 51% over the same period), because they target runtime efficiency rather than just flaw spotting.
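The "agentic loop" pattern Rausis describes can be sketched generically: propose a candidate, score it against feedback from the environment, then refine rather than restart, so tokens already spent on analysis are not wasted. A minimal sketch with hypothetical toy functions, deliberately containing no contract-analysis or exploit logic:

```python
# Minimal sketch of a self-correcting agentic loop. All function names
# are hypothetical placeholders for "model proposes", "environment
# scores", and "model revises using feedback".
def agentic_loop(propose, evaluate, refine, max_steps=10):
    """Iterate propose -> evaluate -> refine until success or budget ends."""
    candidate = propose()
    for _ in range(max_steps):
        score = evaluate(candidate)       # feedback from the environment
        if score >= 1.0:                  # success threshold reached
            return candidate, score
        candidate = refine(candidate, score)  # revise, don't start over
    return candidate, evaluate(candidate)

# Toy usage: "solve" for the number 10 by stepwise refinement.
target = 10
result, score = agentic_loop(
    propose=lambda: 0,
    evaluate=lambda x: 1.0 if x == target else x / target,
    refine=lambda x, s: x + 2,
)
print(result, score)  # → 10 1.0
```

The key design point is that each iteration reuses the prior candidate, which is what cuts repeated token spend compared with independent one-shot attempts.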

While the study establishes a simulated cost of $4.6 million, experts fear the actual economic cost could be substantially higher. Rausis estimates the real risks could be 10-100x higher, potentially reaching $50 million to $500 million or more per major exploit. He warns that with AI scaling, the total sector-wide exposure—factoring in unmodeled leverage and oracle failures—could hit $10–20 billion annually.

The MATS and Anthropic Fellows paper concludes with a warning: while smart contracts may be the first target of this wave of automated attacks, proprietary software is likely next as agents improve at reverse engineering.

Crucially, the paper also reminds readers that the same AI agents can be deployed for defense to patch vulnerabilities. To mitigate the systemic financial threat from easily automated DeFi attacks, Rausis proposes a three-step action plan for policymakers and regulators: AI oversight, new auditing standards, and global coordination.

  • What did the study reveal about AI agents? AI models like GPT‑5 and Claude exploited smart contracts worth $4.6M in simulations.
  • Why is this risk escalating worldwide? Token costs for exploits dropped 70%, making attacks cheaper and more scalable across regions.
  • Could the financial impact extend beyond DeFi? Experts warn real losses could reach $50M–$500M per exploit, with global exposure up to $20B annually.
  • How can regulators and developers respond? Researchers urge AI oversight, stronger auditing standards, and cross‑border coordination to defend systems.

