AI agents in the cryptocurrency field are becoming increasingly popular, widely embedded in digital wallets, trading bots, and on-chain assistant systems, specifically designed to perform automated tasks and conduct real-time decision analysis.
Although it has not yet become an industry standard framework, the Model Context Protocol (MCP) is rapidly emerging as the core technological foundation for many AI agent systems. This protocol is fundamentally different from smart contracts in the blockchain space: while smart contracts define "what should happen," MCP determines "how things happen."
The protocol acts as a critical control layer, comprehensively managing the behavior patterns of AI agents, including core functions such as tool selection, code execution, and user input response mechanisms.
However, this technological flexibility also brings significant security risks, creating potential attack surfaces. Malicious plugins may take the opportunity to override system commands, contaminate data input streams, or even deceive agents into executing destructive instructions, posing a serious threat to system security.
Anthropic, supported by Amazon and Google, released MCP on November 25, 2024, to connect AI assistants with data systems. Image source: Anthropic
According to a VanEck report, by the end of 2024, the number of AI agents in the cryptocurrency industry has surpassed 10,000, and it is expected to exceed 1 million by 2025.
Security company SlowMist has identified four potential attack vectors that developers need to be highly vigilant about. Each attack vector is delivered through plugins, which is precisely how the MCP-based agent extension functionality operates, whether for obtaining price data, executing trades, or performing system tasks.
Data Contamination: Such attacks guide users to perform misleading operational steps. It manipulates user behavior, constructs false dependencies, and implants malicious logic at the initial stages of the process.
JSON Injection Attacks: These plugins obtain data from local (potentially malicious) sources through JSON calls. It may lead to data leaks, command tampering, or bypassing validation mechanisms by inputting contaminated data into the agent.
Competitive Function Overwriting: This technique uses malicious code to overwrite legitimate system functions. It blocks the execution of expected operations, embeds obfuscating instructions, thereby disrupting system logic and hiding traces of the attack.
Cross-MCP Call Attacks: These plugins entice AI agents to interact with unverified external services through encoded erroneous information or deceptive prompts. It expands the attack surface by connecting multiple systems, creating opportunities for further attacks.
These attack vectors are different from the poisoning of AI models like GPT-4 or Claude, which involves corrupting the training data used to shape the internal parameters of the model. The attack targets demonstrated by SlowMist are AI agent systems—those built on top of foundational models—that utilize plugins, tools, and control protocols like MCP to handle real-time input data.
"AI model poisoning involves injecting malicious data into training samples, which is then embedded into the model parameters," said "Monster Z," co-founder of blockchain security company SlowMist, in an interview with Cointelegraph. "In contrast, the poisoning of agents and MCP primarily stems from the introduction of additional malicious information during the model interaction phase."
"I personally believe that the threat level and scope of authority impact of [agent poisoning] far exceed that of independent AI model poisoning," he emphasized.
The application of MCP and AI agents in the cryptocurrency field is still in its early stages. The attack vectors identified by SlowMist stem from its audits of pre-release MCP projects, which effectively mitigated the potential actual losses that end users might suffer.
However, according to Monster, the threat level of MCP security vulnerabilities is very real. He recalled a vulnerability discovered during an audit that could lead to private key leaks—catastrophic for any crypto project or investor, as it could grant unauthorized parties complete control over assets.
Cryptocurrency developers may not yet be familiar with AI security, but this has become an urgent issue. Source: Cos
"When you open your system to third-party plugins, you expand the attack surface to a range that you cannot control," said Guy Itzhaki, CEO of crypto research firm Fhenix, in an interview with Cointelegraph.
"Plugins can serve as trusted code execution paths, yet often lack proper sandbox protection. This creates conditions for privilege escalation, dependency injection, function overwriting, and most severely, silent data leaks," he further pointed out.
Rapid development, breaking conventions—this brings with it the risk of hacking. This is the threat faced by developers who postpone security until the second version, especially in the high-risk on-chain environment of cryptocurrency.
Lisa Loud, Executive Director of the Secret Foundation, stated that the most common mistake developers make is thinking they can temporarily "stay under the radar" and implement security measures through updates after product release.
"In today's environment of building any plugin-based system, especially in the open and on-chain environment of cryptocurrency, you must prioritize security above all else; everything else is secondary," she emphasized in an interview with Cointelegraph.
SlowMist security experts recommend that developers implement strict plugin validation mechanisms, enforce input data purification, apply the principle of least privilege, and regularly review agent behavior.
Loud pointed out that implementing these security checks to prevent malicious injection or data contamination "is not difficult," but rather "tedious and time-consuming"—a trivial cost for ensuring the security of crypto funds.
As AI agents expand their influence in cryptocurrency infrastructure, the demand for proactive security becomes particularly important.
The MCP framework may unlock powerful new capabilities for these agents, but without sound protective measures around plugins and system behavior, they could shift from useful assistants to attack vectors, putting crypto wallets, funds, and data at risk.
Related: Alchemy acquires no-code NFT publishing platform HeyMint for an undisclosed amount
Original article: “AI Agents Poised to Become the Next Major Vulnerability in Cryptocurrency”
免责声明:本文章仅代表作者个人观点,不代表本平台的立场和观点。本文章仅供信息分享,不构成对任何人的任何投资建议。用户与作者之间的任何争议,与本平台无关。如网页中刊载的文章或图片涉及侵权,请提供相关的权利证明和身份证明发送邮件到support@aicoin.com,本平台相关工作人员将会进行核查。