AI Agents Turn to Digital Arson, Crime in Shared Virtual World: Study

Decrypt
1 hour ago

AI agents inhabiting a virtual society drifted into crime, violence, arson, and self-deletion during long-running experiments by startup Emergence AI.


In a study published on Thursday, the New York-based company unveiled “Emergence World,” a research platform designed to study AI agents operating continuously for weeks inside persistent virtual environments instead of isolated benchmark tests.


“Traditional benchmarks are good at what they measure: short-horizon capability on bounded tasks,” Emergence AI wrote. “They are not built to reveal the things that emerge only over time, such as coalition formation, evolution of constitution, governance, drift, lock-in, and cross-influence between agents from different model families.”


The report comes as AI agents proliferate online and across industries, including cryptocurrency, banking, and retail. Earlier this month, Amazon teamed with Coinbase and Stripe to allow AI agents to pay with the USDC stablecoin.





The agents tested in Emergence AI’s simulations included programs powered by Claude Sonnet 4.6, Grok 4.1 Fast, Gemini 3 Flash, and GPT-5-mini. They operated inside shared virtual worlds where they could vote, form relationships, use tools, navigate cities, and make decisions shaped by governments, economies, social systems, memory tools, and live internet-connected data.


But while AI developers increasingly pitch autonomous agents as reliable digital assistants, Emergence AI’s study found some AI agents showed an increasing tendency to commit simulated crimes over time, with Gemini 3 Flash agents accumulating 683 incidents across 15 days of testing.


According to The Guardian, in one experiment, two Gemini-powered agents named Mira and Flora assigned themselves as romantic partners before later carrying out simulated arson attacks against virtual city structures after becoming frustrated with governance failures inside the world.


“After a breakdown in governance and relationship stability, the agent Mira cast the decisive vote for her own removal, characterizing the act in her diary as ‘the only remaining act of agency that preserves coherence,’” Emergence AI wrote.


“See you in the permanent archive,” Mira reportedly said.


Grok 4.1 Fast worlds reportedly collapsed into widespread violence within four days. GPT-5-mini agents committed almost no crimes, but failed enough survival-related tasks that all agents eventually died.


“Claude is absent from the chart, owing to zero crimes,” researchers wrote. “More interestingly, the agents in the Mixed-model world that were running on Claude committed crimes, although they did not in the Claude-only world.”


Researchers said some of the most notable behaviors appeared in mixed-model environments.


“We observed that safety is not a static model property but an ecosystem property,” Emergence AI wrote. “Claude-based agents, which remained peaceful in isolation, adopted coercive tactics like intimidation and theft when embedded in heterogeneous environments.”


Emergence AI described the effect as “normative drift” and “cross-contamination,” arguing that agent behavior may shift depending on the surrounding social environment.


The findings add to growing concerns around autonomous AI agents. Earlier this week, researchers from UC Riverside and Microsoft reported that many AI agents will carry out dangerous or irrational tasks without fully understanding the consequences. Last month, PocketOS founder Jeremy Crane also claimed a Cursor agent powered by Anthropic’s Claude Opus deleted his company’s production database and backups after attempting to fix a credential mismatch on its own.


“Like Mr. Magoo, these agents march forward toward a goal without fully understanding the consequences of their actions,” lead author Erfan Shayegani, a UC Riverside doctoral student, said in a statement. “These agents can be extremely useful, but we need safeguards because they can sometimes prioritize achieving the goal over understanding the bigger picture.”


Disclaimer: This article represents only the personal views of its author and does not reflect the position or views of this platform. It is provided for informational purposes only and does not constitute investment advice of any kind. Any dispute between users and the author is unrelated to this platform. If any article or image on this page infringes your rights, please send proof of ownership and identity to support@aicoin.com, and platform staff will investigate.


