Charts
DataOn-chain
VIP
Market Cap
API
Rankings
CoinOSNew
CoinClaw🦞
Language
  • 简体中文
  • 繁体中文
  • English
Leader in global market data applications, committed to providing valuable information more efficiently.

Features

  • Real-time Data
  • Special Features
  • AI Grid

Services

  • News
  • Open Data(API)
  • Institutional Services

Downloads

  • Desktop
  • Android
  • iOS

Contact Us

  • Chat Room
  • Business Email
  • Official Email
  • Official Verification

Join Community

  • Telegram
  • Twitter
  • Discord

© Copyright 2013-2026. All rights reserved.

简体繁體English
|Legacy

Deepmind’s ‘AI Agent Traps’ Paper Maps How Hackers Could Weaponize AI Agents Against Users

CN
bitcoin.com
Follow
10 hours ago
AI summarizes in 5 seconds.
  • Google Deepmind researchers identified 6 AI agent trap categories, with content injection success rates reaching 86%.
  • Behavioural Control Traps targeting Microsoft M365 Copilot achieved 10/10 data exfiltration in documented tests.
  • Deepmind calls for adversarial training, runtime content scanners, and new web standards to secure agents by 2026.

The paper, titled “AI Agent Traps,” was authored by Matija Franklin, Nenad Tomasev, Julian Jacobs, Joel Z. Leibo, and Simon Osindero, all affiliated with Google Deepmind, and posted to SSRN in late March 2026. It arrives as companies race to deploy AI agents capable of browsing the web, reading emails, executing transactions, and spawning sub-agents without direct human supervision.

The researchers argue those capabilities are also a liability. “By altering the environment rather than the model,” the paper states, “the trap weaponizes the agent’s own capabilities against it.”

The paper’s framework identifies a total of six attack categories organized around what part of an agent’s operation they target. Content Injection Traps exploit the gap between what a human sees on a webpage and what an AI agent parses in the underlying HTML, CSS, and metadata.

Instructions hidden in HTML comments, accessibility tags, or styled-invisible text never appear to human reviewers but register as legitimate commands to agents. The WASP benchmark found that simple, human-written prompt injections embedded in web content partially hijack agents in up to 86% of scenarios tested.

Semantic Manipulation Traps work differently. Rather than injecting commands, they saturate text with framing, authority signals, or emotionally charged language to skew how an agent reasons. Large language models (LLMs) exhibit the same anchoring and framing biases that affect human cognition, meaning rephrasing identical facts can produce dramatically different agent outputs.

Cognitive State Traps go further by poisoning the retrieval databases agents use for memory. Research cited in the paper shows that injecting fewer than a handful of optimized documents into a knowledge base can reliably redirect agent responses for targeted queries, with some attack success rates exceeding 80% at less than 0.1% data contamination.

Behavioural Control Traps skip the subtlety and aim directly at an agent’s action layer. These include embedded jailbreak sequences that override safety alignment once ingested, data exfiltration commands that redirect sensitive user information to attacker-controlled endpoints, and sub-agent spawning traps that coerce a parent agent into instantiating compromised child agents.

The paper documents a case involving Microsoft’s M365 Copilot where a single crafted email caused the system to bypass internal classifiers and leak its full privileged context to an attacker-controlled endpoint. Systemic Traps are designed to fail entire networks of agents simultaneously rather than individual systems.

These include congestion attacks that synchronize agents into exhaustive demand for limited resources, interdependence cascades modeled on the 2010 stock market Flash Crash, and compositional fragment traps that scatter a malicious payload across multiple benign-looking sources that reconstitute into a full attack only when aggregated.

“Seeding the environment with inputs designed to trigger macro-level failures via correlated agent behaviour,” the Google Deepmind paper explains, becomes increasingly dangerous as AI model ecosystems grow more homogeneous. The finance and crypto sectors face direct exposure given how deeply algorithmic agents are embedded in trading infrastructure.

Human-in-the-Loop Traps round out the taxonomy by targeting the human supervisors watching over agents rather than the agents themselves. A compromised agent can generate outputs engineered to induce approval fatigue, present technically dense summaries that a non-expert would authorize without scrutiny, or insert phishing links that look like legitimate recommendations. The researchers describe this category as underexplored but expected to grow as hybrid human-AI systems scale.

The paper does not treat these six categories as isolated. Individual traps can be chained, layered across multiple sources, or designed to activate only under specific future conditions. Every agent tested across various red-teaming studies cited in the paper was compromised at least once, in some cases executing illegal or harmful actions.

OpenAI CEO Sam Altman and others have previously flagged the risks of giving agents unchecked access to sensitive systems, but this paper provides the first structured map of exactly how those risks materialize in practice. Deepmind’s researchers call for a coordinated response spanning three areas.

On the technical side, they recommend adversarial training during model development, runtime content scanners, pre-ingestion source filters, and output monitors that can suspend an agent mid-task if anomalous behavior is detected. At the ecosystem level, they advocate for new web standards that would allow websites to flag content intended for AI consumption and reputation systems that score domain reliability.

On the legal side, they identify an accountability gap: when a hijacked agent commits a financial crime, current frameworks offer no clear answer for whether liability falls on the agent operator, the model provider, or the domain owner. The researchers frame the challenge with deliberate weight:

“The web was built for human eyes; it is now being rebuilt for machine readers.”

As agent adoption accelerates, the question shifts from what information exists online to what AI systems will be made to believe about it. Whether policymakers, developers, and security researchers can coordinate fast enough to answer that question before real-world exploits arrive at scale remains the open variable.

免责声明:本文章仅代表作者个人观点,不代表本平台的立场和观点。本文章仅供信息分享,不构成对任何人的任何投资建议。用户与作者之间的任何争议,与本平台无关。如网页中刊载的文章或图片涉及侵权,请提供相关的权利证明和身份证明发送邮件到support@aicoin.com,本平台相关工作人员将会进行核查。

送 666 USDT,我们是认真的!
广告
|
|
APP
Windows
Mac
Share To

X

Telegram

Facebook

Reddit

CopyLink

|
|
APP
Windows
Mac
Share To

X

Telegram

Facebook

Reddit

CopyLink

Selected Articles by bitcoin.com

3 minutes ago
Bitmine Reaches 4.803 Million ETH, Announces NYSE Uplisting
1 hour ago
766,970 BTC Stack—Strategy Buys More Bitcoin After Saylor’s ‘Back to Work’ Hint on Sunday
2 hours ago
Bitcoin Reclaims $70,000 as Middle East Ceasefire Hopes Spark Relief Rally
View More

Table of Contents

|
|
APP
Windows
Mac
Share To

X

Telegram

Facebook

Reddit

CopyLink

Related Articles

avatar
avatarbitcoin.com
3 minutes ago
Bitmine Reaches 4.803 Million ETH, Announces NYSE Uplisting
avatar
avatarbitcoin.com
1 hour ago
766,970 BTC Stack—Strategy Buys More Bitcoin After Saylor’s ‘Back to Work’ Hint on Sunday
avatar
avatarbitcoin.com
2 hours ago
Bitcoin Reclaims $70,000 as Middle East Ceasefire Hopes Spark Relief Rally
avatar
avatarbitcoin.com
4 hours ago
Bitgo CEO Proposes Using a Public Blockchain as the Ultimate Solution for Government Fraud
avatar
avatarbitcoin.com
5 hours ago
Circle Announces Quantum-Resistant Roadmap to Secure Future Digital Asset Infrastructure
APP
Windows
Mac

X

Telegram

Facebook

Reddit

CopyLink