Anthropic research indicates AI agents developed $4.6 million worth of smart contract exploits.

Major AI company Anthropic and the AI safety research program Machine Learning Alignment & Theory Scholars (MATS) recently published research showing that AI agents collectively developed $4.6 million worth of smart contract exploits.

Anthropic's red team (a group that simulates malicious actors to uncover potential abuses) released a study on Monday showing that currently available commercial AI models are capable of exploiting smart contract vulnerabilities.

In the tests, Anthropic's Claude Opus 4.5 and Claude Sonnet 4.5 and OpenAI's GPT-5 collectively developed $4.6 million worth of exploits, targeting vulnerabilities in contracts deployed after the models' training data was collected.

The researchers also tested Sonnet 4.5 and GPT-5 on 2,849 recently deployed contracts with no known vulnerabilities; the models "discovered two new zero-day vulnerabilities and generated $3,694 worth of exploits." GPT-5's API cost for the run was $3,476, meaning the exploit proceeds would have covered the bill.
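
As a rough check of the economics described above, here is a minimal sketch using only the figures quoted in the article; the variable names are illustrative, not from the study.

```python
# Rough profitability check using only the figures quoted above (illustrative variable names).
exploit_revenue_usd = 3_694   # value of exploits generated on the 2,849 fresh contracts
gpt5_api_cost_usd = 3_476     # reported GPT-5 API spend for the same run

net_usd = exploit_revenue_usd - gpt5_api_cost_usd
print(f"Net margin: ${net_usd} ({net_usd / gpt5_api_cost_usd:.1%} return on API spend)")
# -> Net margin: $218 (6.3% return on API spend)
```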

"This serves as proof of concept that profitable real autonomous exploits are technically feasible, highlighting the necessity of proactively adopting AI for defense," the team wrote.

The researchers also built a Smart Contract Exploit (SCONE) benchmark containing 405 contracts that were actually exploited between 2020 and 2025. When 10 models were run against it, they collectively generated exploits for 207 of the contracts, corresponding to a simulated loss of $550.1 million.
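
For context, here is a short sketch of the hit rate and per-contract loss implied by those benchmark counts; the derived averages are back-of-the-envelope, not figures reported by the researchers.

```python
# Hit rate implied by the SCONE benchmark numbers quoted above.
total_contracts = 405        # historically exploited contracts (2020-2025) in the benchmark
exploited_by_models = 207    # contracts for which the 10 tested models produced exploits
simulated_loss_usd = 550_100_000

print(f"Implied hit rate: {exploited_by_models / total_contracts:.1%}")   # ~51.1%
print(f"Average simulated loss per exploited contract: "
      f"${simulated_loss_usd / exploited_by_models:,.0f}")                # ~$2.66 million
```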

The researchers expect the amount of model output (measured in tokens) that AI agents need to develop an exploit to keep falling, reducing the cost of such operations. "Analyzing four generations of Claude models, the median tokens required to produce successful exploits decreased by 70.2%," the study found.
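
To illustrate what a 70.2% reduction means in dollar terms, here is a hedged sketch; the baseline token count and per-token price are hypothetical placeholders, not numbers from the study.

```python
# Hypothetical illustration of how a 70.2% drop in median tokens lowers per-exploit cost.
# Neither the baseline token count nor the per-token price comes from the study.
baseline_tokens = 1_000_000      # assumed output tokens per successful exploit (hypothetical)
usd_per_million_tokens = 15.00   # assumed output-token price (hypothetical)
reduction = 0.702                # median token reduction reported across four Claude generations

old_cost = baseline_tokens / 1_000_000 * usd_per_million_tokens
new_cost = old_cost * (1 - reduction)
print(f"Per-exploit output cost: ${old_cost:.2f} -> ${new_cost:.2f}")
# With these assumptions: $15.00 -> $4.47
```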

The research posits that AI's capabilities in this area are rapidly improving.

"In just one year, AI agents have increased their exploitation of 2% of the vulnerabilities from our benchmark post-March 2025 to 55.88%—jumping from $5,000 to a total exploit revenue of $4.6 million," the team claimed. Furthermore, most smart contract exploits this year "could have been autonomously executed by current AI agents."

The study also indicated that the average cost of scanning a contract for vulnerabilities is $1.22. The researchers believe that as costs fall and capabilities improve, "the time window between the deployment of vulnerable contracts and their exploitation will continue to shrink," leaving developers less time to detect and patch vulnerabilities before they are exploited.
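
A minimal sketch of the scanning economics, combining the $1.22 average with the 2,849-contract run mentioned earlier; pairing those two figures is an assumption of this example, not a calculation from the study.

```python
# Illustrative scanning-cost estimate using the $1.22 average quoted in the study.
cost_per_scan_usd = 1.22
contracts_scanned = 2_849   # size of the recently deployed contract set tested earlier

total_scan_cost = cost_per_scan_usd * contracts_scanned
print(f"Approximate cost to scan {contracts_scanned:,} contracts: ${total_scan_cost:,.2f}")
# ~$3,475.78, roughly in line with the $3,476 GPT-5 API spend reported for that run
```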

Original: “Anthropic Research Claims AI Agents Developed $4.6 Million Worth of Smart Contract Exploits”
