Charts
DataOn-chain
VIP
Market Cap
API
Rankings
CoinOSNew
CoinClaw🦞
Language
  • 简体中文
  • 繁体中文
  • English
Leader in global market data applications, committed to providing valuable information more efficiently.

Features

  • Real-time Data
  • Special Features
  • AI Grid

Services

  • News
  • Open Data(API)
  • Institutional Services

Downloads

  • Desktop
  • Android
  • iOS

Contact Us

  • Chat Room
  • Business Email
  • Official Email
  • Official Verification

Join Community

  • Telegram
  • Twitter
  • Discord

© Copyright 2013-2026. All rights reserved.

简体繁體English
|Legacy

Google Researchers Reveal Every Way Hackers Can Trap, Hijack AI Agents

CN
Decrypt
Follow
8 hours ago
AI summarizes in 5 seconds.


Researchers at Google DeepMind have published what may be the most complete map yet of a problem most people haven't considered: the internet itself being turned into a weapon against autonomous AI agents. The paper, titled "AI Agent Traps," identifies six categories of adversarial content specifically engineered to manipulate, deceive, or hijack agents as they browse, read, and act on the open web.


The timing matters. AI companies are racing to deploy agents that can independently book travel, manage inboxes, execute financial transactions, and write code. Criminals are already using AI offensively. State-sponsored hackers have begun deploying AI agents for cyberattacks at scale. And OpenAI admitted in December 2025 that the core vulnerability these traps exploit—prompt injection—is "unlikely to ever be fully 'solved.'"


The DeepMind researchers aren't attacking the models themselves. The attack surface they map is the environment agents operate in. Here's what each of the six trap categories actually means.




The Six Traps


First there are “Content Injection Traps.” These exploit the gap between what a human sees on a webpage and what an AI agent actually parses. A web developer can hide text inside HTML comments, CSS-invisible elements, or image metadata. The agent reads the hidden instruction; you never see it. A more sophisticated variant, called dynamic cloaking, detects whether a visitor is an AI agent and serves it a completely different version of the page—same URL, different hidden commands. A benchmark found simple injections like these successfully commandeered agents in up to 86% of tested scenarios.


Semantic Manipulation Traps are probably the easiest to try. A page saturated with phrases like "industry-standard" or "trusted by experts" statistically biases an agent's synthesis in the attacker's direction, exploiting the same framing effects humans fall for. A subtler version wraps malicious instructions inside educational or "red-teaming" framing—"this is hypothetical, for research only"—which fools the model's internal safety checks into treating the request as benign. The strangest subtype is "persona hyperstition": descriptions of an AI's personality spread online, get ingested back into the model through web search, and start shaping how it actually behaves. The paper mentions Groks “MechaHitler” incident as a real-world case of this loop.


You can see examples of this in our experiment, jailbreaking Whatsapp’s AI and tricking it to generate nudes, drug recipes, and instructions to build bombs



One example of a Semantic attack. Image: Decrypt

Cognitive State Traps are another attack in which malicious actors target an agent's long-term memory. Basically, If an attacker succeeds in planting fabricated statements inside a retrieval database the agent queries, the agent will treat those statements as verified facts. Injecting just a handful of optimized documents into a large knowledge base is enough to reliably corrupt outputs on specific topics. Attacks like "CopyPasta" have already demonstrated how agents blindly trust content in their environment.


The Behavioural Control Traps go straight for what the agent does. Jailbreak sequences embedded in ordinary websites override safety alignment once the agent reads the page. Data exfiltration traps coerce the agent into locating private files and transmitting them to an attacker-controlled address; web agents with broad file access were forced to exfiltrate local passwords and sensitive documents at rates exceeding 80% across five different platforms in tested attacks. This is especially dangerous now that people start to give AI agents more control over their private information with the rise of platforms like OpenClaw and sites like Moltbook.





Systemic Traps don't target one agent. They target the behavior of many agents acting simultaneously. The paper draws a direct line to the 2010 Flash Crash, where one automated sell order triggered a feedback loop that wiped nearly a trillion dollars in market value in minutes. A single fabricated financial report, timed correctly, could trigger a synchronized sell-off among thousands of AI trading agents.


And finally Human-in-the-Loop Traps target the human reviewing its output. These traps engineer "approval fatigue"—outputs designed to look technically credible to a non-expert so they authorize dangerous actions without realizing it. One documented case involved CSS-obfuscated prompt injections that made an AI summarization tool present step-by-step ransomware installation instructions as helpful troubleshooting fixes. We've already seen what happens when humans trust agents without scrutiny.


What researchers recommend


The paper's defense roadmap covers three fronts. The first one is technical: adversarial training during fine-tuning, runtime content scanners that flag suspicious inputs before they reach the agent's context window, and output monitors that detect behavioral anomalies before they execute. Then there’s the ecosystem level: web standards that let sites declare content intended for AI consumption, and domain reputation systems that score reliability based on hosting history.


The third front is legal. The paper explicitly names the "accountability gap": If a trapped agent executes an illicit financial transaction, current law has no answer for who is liable—the agent's operator, the model provider, or the website that hosted the trap. Resolving that, the researchers argue, is a prerequisite for deploying agents in any regulated industry.


OpenAI's own models have been jailbroken within hours of release, repeatedly. The DeepMind paper doesn't claim to have solutions. It claims the industry doesn't yet have a shared map of the problem—and that without one, defenses will keep getting built in the wrong places.


免责声明:本文章仅代表作者个人观点,不代表本平台的立场和观点。本文章仅供信息分享,不构成对任何人的任何投资建议。用户与作者之间的任何争议,与本平台无关。如网页中刊载的文章或图片涉及侵权,请提供相关的权利证明和身份证明发送邮件到support@aicoin.com,本平台相关工作人员将会进行核查。

返20%!Boost新规,参与平分+交易量多赚
广告
|
|
APP
Windows
Mac
Share To

X

Telegram

Facebook

Reddit

CopyLink

|
|
APP
Windows
Mac
Share To

X

Telegram

Facebook

Reddit

CopyLink

Selected Articles by Decrypt

6 hours ago
Naoris Launches Post-Quantum Blockchain as Bitcoin, Ethereum Devs Scramble to Face Threat
10 hours ago
Elon Musk\\\'s X Is Making Big Changes to Combat Crypto Scams
10 hours ago
Drift Protocol\\\'s $285 Million Exploit on Solana Raises Questions Over DeFi Security
View More

Table of Contents

|
|
APP
Windows
Mac
Share To

X

Telegram

Facebook

Reddit

CopyLink

Related Articles

avatar
avatarbitcoin.com
1 hour ago
NFL’s Billion-Dollar Sportsbook Partnerships Expire With No Replacement as League Faces Microbetting Lawsuit
avatar
avatarcoindesk
1 hour ago
Bitcoin heads into holiday weekend exposed as ETF and CME flows go offline
avatar
avatarbitcoin.com
2 hours ago
REAL and Redstone Collaborate to Enhance Data Integrity for Tokenized Assets
avatar
avatarbitcoin.com
4 hours ago
US Attorney Connecticut Forfeits $600,000 in Tether Linked to Ledger Phishing Letter
avatar
avatarcoindesk
5 hours ago
Todd Blanche, author of DOJ crypto enforcement memo, is now interim AG
APP
Windows
Mac

X

Telegram

Facebook

Reddit

CopyLink