
Cybercrime Might Be the One Job AI Isn’t Taking, Study Suggests

Decrypt
2 hours ago

For three years, cybersecurity firms, governments, and AI labs have warned that generative AI would unleash a new generation of supercharged hackers. According to a new academic paper that actually went and looked, the supercharged hackers are mostly using ChatGPT to write spam and generate nudes for fun.


The study, titled Stand-Alone Complex or Vibercrime?, was published on arXiv by researchers from Cambridge and other universities and aims to understand how the cybercrime underground is actually adopting AI, not how cybersecurity vendors say it is.


"We present here one of the first attempts at a mixed-methods empirical study of early patterns of GenAI adoption in the cybercrime underground," researchers wrote.


The team analyzed 97,895 forum threads posted after ChatGPT launched in November 2022, drawn from the Cambridge Cybercrime Centre's CrimeBB dataset of underground and dark web forums. They ran topic models, manually read more than 3,200 threads, and ethnographically immersed themselves in the scene.
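The pipeline described above (automated topic modelling to bucket tens of thousands of threads, followed by manual reading of a subset) is a standard mixed-methods setup. As a rough illustration only, not the authors' actual code, a first pass with a topic model might look like this sketch, where the thread texts are invented placeholders:

```python
# Hypothetical sketch of the study's first pass: cluster forum threads
# with a topic model, then route each thread to a bucket for manual review.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy stand-ins for forum thread texts (the real dataset had 97,895 threads).
threads = [
    "free access to jailbroken chatgpt please",
    "wormgpt subscription not working refund",
    "selling ai nude generation service cheap",
    "how to write phishing email with llm",
    "stack overflow replacement ai coding assistant",
    "vibe coding tools for malware development",
]

# Bag-of-words representation of each thread.
vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(threads)

# Fit a small LDA model; a real study would tune the topic count
# and inspect the top words per topic before labelling anything.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(X)  # one topic distribution per thread

# The dominant topic assigns each thread to a bucket; humans then read
# samples from each bucket to decide what it actually contains.
for text, dist in zip(threads, doc_topics):
    print(dist.argmax(), text[:40])
```

The manual-coding step is where classifications like "other" versus "AI used for crime" would come from; the topic model only narrows down what the humans have to read.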

The conclusion is unflattering for the AI doom community: 97.3% of threads in the sample were classified as “other,” meaning not actually about using AI for crime at all. Only 1.9% involved someone using vibe coding tools.


‘Nothing more than an unrestricted ChatGPT’


Remember WormGPT, FraudGPT, and the wave of supposedly malicious chatbots that flooded headlines in 2023? The forum data tells a different story.


Most posts about “Dark AI” products, the researchers found, were people begging for free access, idle speculation, and complaints that the tools didn’t actually work. One developer of a popular Dark AI service eventually admitted to forum members that the product was a marketing exercise.


“At the end of the day, [CybercrimeAI] is nothing more than an unrestricted ChatGPT,” the developer wrote, before the project shut down. “Anyone on the Internet can use a well-known jailbreak technique and achieve the same, if not better, results.”


By late 2024, the researchers say, jailbreaks for mainstream models had become disposable. Most stop working in a week or less. Open-source models can be jailbroken indefinitely, but they are slow, resource-heavy, and frozen in time.


“Guardrails for AI systems are proving both useful and effective,” the authors conclude, in what they themselves call a counterintuitive finding for a critical paper.


Vibe coding is real. Vibe hacking, mostly, is not


The paper directly addresses Anthropic's widely covered August 2025 report claiming Claude Code had been used to run a "vibe hacking" extortion campaign against 17 organizations. The Cambridge team's data simply does not show that pattern in the wider underground.


In the forums they studied, AI coding assistants are being used the same way mainstream developers use them: as autocomplete and Stack Overflow replacements for already-skilled coders. Low-skill actors stick with pre-made scripts, because pre-made scripts work.


The researchers found that even hackers don't trust their vibe-coded hacking tools. "AI-assisted coding is a double-edged sword. It will speed up development but also amplifies risks such as insecure code and supply chain vulnerabilities," one user said in a forum monitored by the researchers.


Another warned about long-term skill loss: "It's clear now that using AI for code causes a very fast negative degradation of your skills," a hacker wrote in a forum. "If your goal is just to turn out SaaS scams and you don't care about code quality/security/performance it can be viable to vibe code. (Also seems viable for phishing)."


This stands in stark contrast to alarmist forecasts from Europol, which warned in 2025 that fully autonomous AI could one day control criminal networks.


Where AI is actually helping criminals


The disruption, when it shows up, is at the bottom of the food chain.


SEO scammers are using LLMs to mass-produce blog spam to chase declining ad revenue. Romance fraudsters and eWhoring operators are bolting on voice cloning and image generation. Get-rich-quick hustlers are churning out AI-written eBooks to sell for $20 a pop.


The most disturbing market the researchers found involved nude image generation services. One operator advertised: “I’m able to make any girl nude with an AI… 1 Picture = $1, 10 Pictures = $8, 50 Pictures = $40, 90 Pictures $75.”


None of this is sophisticated cybercrime. It is the same low-margin, high-volume hustle that powered the spam industry for two decades, now running on slightly better tools.


The researchers' closing observation is the most pointed one. The biggest way AI ends up disrupting the cybercrime ecosystem, they suggest, may not be by making criminals more capable. It may be by pushing laid-off developers from legitimate tech into the underground looking for work.


“In recent months anxiety over labour market disruption from these tools is increasing precipitously,” the paper reads. “This may end up being the most important way in which generative AI tools disrupt the cybercrime ecosystem—mass layoffs, economic downturn and a cool job market pushing legitimate, more skilled developers into the underground communities of get rich quick schemes, fraud, and cybercrime.”


