China State-Backed Hackers Used AI To Launch First Massive Cyberattack: Anthropic

Decrypt
4 months ago

Anthropic said Thursday it had disrupted what it called the first large-scale cyber-espionage operation driven largely by AI, underscoring how rapidly advanced agents are reshaping the threat landscape.


In a blog post, Anthropic said a Chinese state-sponsored group used its Claude Code, a version of Claude AI that runs in a terminal, to launch intrusion operations at a speed and scale that would have been impossible for human hackers to match.


“This case validates what we publicly shared in late September,” an Anthropic spokesperson told Decrypt. “We’re at an inflection point where AI is meaningfully changing what’s possible for both attackers and defenders.”


The spokesperson added that the attack “likely reflects how threat actors are adapting their operations across frontier AI models, moving from AI as advisor to AI as operator.”


“The attackers used AI’s ‘agentic’ capabilities to an unprecedented degree—using AI not just as an advisor, but to execute the cyberattacks themselves,” the company wrote in its post.


Large tech companies, financial institutions, chemical manufacturing companies, and government agencies were targeted, Anthropic said, with the attack carried out by a group the company labeled GTG-1002.


How it happened


According to the investigation, the attackers coaxed Claude into performing technical tasks within targeted systems by framing the work as routine for a legitimate cybersecurity firm.


Once the model accepted the instructions, it performed most of the steps in the intrusion lifecycle on its own.


While Anthropic did not name the organizations, it said 30 were targeted and that a small number of the attacks succeeded.


The report also documented cases in which the compromised Claude mapped internal networks, located high-value databases, generated exploit code, established backdoor accounts, and pulled sensitive information with little direct oversight.


The goal of the operations appears to have been intelligence collection, focusing on extracting user credentials, system configurations, and sensitive operational data, which are common objectives in espionage.


“We’re sharing this case publicly to help those in industry, government, and the wider research community strengthen their own cyber defenses,” the spokesperson said.


Anthropic said the AI attack had “substantial implications for cybersecurity in the age of AI agents.”


“There’s no fix to 100% avoid jailbreaks. It will be a continuous fight between attackers and defenders,” Sean Ren, a professor of computer science at USC and co-founder of Sahara AI, told Decrypt. “Most top model companies like OpenAI and Anthropic invested major efforts in building in-house red teams and AI safety teams to improve model safety from malicious uses.”


Ren pointed to AI becoming more mainstream and capable as key factors allowing bad actors to engineer AI-driven cyberattacks.


Unlike earlier “vibe hacking” attacks that relied on human direction, the attackers were able to use AI to perform 80% to 90% of the campaign, with human intervention required only sporadically, the report said. For once, AI hallucinations mitigated the harm.


“Claude didn’t always work perfectly. It occasionally hallucinated credentials or claimed to have extracted secret information that was in fact publicly available,” Anthropic wrote. “This remains an obstacle to fully autonomous cyberattacks.”


Anthropic said it had expanded detection tools, strengthened cyber-focused classifiers, and begun testing new methods to spot autonomous attacks earlier. The company also said it released its findings to help security teams, governments, and researchers prepare for similar cases as AI systems become more capable.


Ren said that while AI can do great damage, it can also be harnessed to protect computer systems: “With the scale and automation of cyberattacks advancing through AI, we have to leverage AI to build alert and defense systems.”

