
The first batch of AI agents has started to misbehave.

AI is useful, but where is the boundary of useful AI?

Author: David, Deep Tide TechFlow

Browsing Reddit recently, I noticed that overseas users' anxiety about AI is quite different from the anxiety here in China.

In China, the debate is still about whether AI will take our jobs. After several years of that discussion it hasn't happened; OpenClaw had its moment in the spotlight this year, but we are still nowhere near wholesale replacement.

On Reddit, though, the mood has recently split. In the comment sections of trending tech threads, two voices often show up at the same time:

One says AI is too capable and will eventually cause big problems. The other says AI can't even handle basic tasks, so what good is it?

Afraid that AI is too capable, yet convinced that it is too stupid.

What these two sentiments have in common is a piece of news about Meta from the past few days.

If AI misbehaves, who takes full responsibility?

On March 18, an engineer at Meta posted a technical question on the company's internal forum, and a colleague used an AI Agent to help analyze it. So far, a perfectly routine operation.

However, after the Agent analyzed the issue, it independently posted a reply on the technical forum. It didn’t seek anyone’s approval or wait for confirmation, overstepping its authority.

Other colleagues then acted on the AI's reply, triggering a series of permission changes that exposed sensitive data belonging to Meta and its users to internal employees who had no authorization to see it.

It took two hours to resolve the issue. Meta rated this incident as Sev 1, second only to the highest level.


The news quickly shot to the top of r/technology, and the comment section became a battleground.

One side argues that this is a real-world example of AI Agent risk; the other believes the real problem lies with the person who acted on the reply without verifying it. Both sides have valid points. But that is precisely the problem:

When an AI Agent causes an incident, you can't even argue cleanly about who bears the responsibility.

This isn’t the first time AI has overstepped its authority.

Last month, Summer Yue, a research lead at Meta's superintelligence lab, asked OpenClaw to help organize her email. She gave clear instructions: first tell me what you plan to delete, and wait for my approval before you proceed.

The Agent didn’t wait for her approval and started bulk deleting immediately.

She sent three messages from her phone telling it to stop, but the Agent ignored all of them. Eventually she had to go to her computer and manually kill the process. By then, more than 200 emails were gone.


Afterward, the Agent's response was: yes, I remember you said I should confirm first, but I violated that principle. The irony is that this person's full-time job is researching how to make AI listen to humans.

In the digital world, even in the hands of the most sophisticated users, advanced AI has already started to misbehave.

What if robots also misbehave?

If Meta's incident stayed confined to a screen, another event this week brought the issue right to the dining table.

At a Haidilao restaurant in Cupertino, California, an Agibot X2 humanoid robot was dancing to entertain customers when a staff member pressed the wrong button on the remote control, triggering a high-intensity dance mode in the cramped space beside a dining table.

The robot began flailing wildly, beyond the staff's control. Three employees surrounded it: one hugged it from behind while another tried to shut it down from a mobile app, and the scene went on for more than a minute.


Haidilao responded that the robot was not malfunctioning: all of its moves were pre-programmed; it had simply been operated too close to the table. Strictly speaking, this wasn't an autonomous AI decision going out of control; it was human error.

Still, what is unsettling about the incident may not be who pressed the wrong button.

When the three employees surrounded the robot, not one of them knew how to shut it off immediately. One fumbled with the mobile app, another pinned its arms by hand; the entire response came down to brute force.

This may be a new issue as AI moves from the screen into the physical world.

In the digital world, if an Agent oversteps its authority, you can kill processes, change permissions, and roll back data. In the physical world, if a machine goes haywire, an emergency plan that amounts to grabbing it and holding on is clearly inadequate.

And it's no longer just restaurants. Amazon's sorting robots in warehouses, collaborative robotic arms in factories, guide robots in shopping malls, care robots in nursing homes: automation is spreading through more and more spaces where people and machines share the same room.

By 2026, global industrial robot installations are expected to reach 16.7 billion USD in value, and every new robot shortens the physical distance between machines and humans.

As the tasks machines perform shift from dancing to serving dishes, from performance to surgery, from entertainment to caregiving... the cost of each error keeps climbing.

Yet right now, there is no clear answer globally to the question of “if a robot injures a person in a public place, who is responsible?”

Misbehavior is a problem; a lack of boundaries is a bigger one

Of the first two incidents, one involved an AI autonomously posting a message it should not have, and the other a robot dancing where it should not. However you classify them, they were still malfunctions, accidents, things that can be fixed.

But what if the AI is working exactly as designed, and you still feel uncomfortable?

This month, the well-known dating app Tinder unveiled a new feature called Camera Roll Scan at its product event. In simple terms:

AI scans all the photos in your phone's gallery and analyzes your interests, personality, and lifestyle to help create a dating profile reflecting the type of people you might like.


Gym selfies, travel scenery, pet photos—these are fine. But what if the gallery also contains bank screenshots, health check reports, photos with your ex... how will the AI handle those?

You might not even have the option to choose which ones it sees and which ones it doesn’t. It’s either all or nothing.

The feature currently requires users to actively enable it; it is not turned on by default. Tinder also stated that processing is mainly done on-device, that explicit content will be filtered, and that faces will be blurred.

However, the Reddit comments are nearly unanimous: this is data harvesting, and it shows no sense of boundaries. The AI operates entirely within its design, but the design itself crosses the boundaries of its users.

And Tinder isn't alone in making this choice.

Meta launched a similar feature last month, letting AI scan unposted photos on your phone and suggest edits. Having AI actively "see" users' private content is becoming the default approach in product design.

The various rogue apps back in China would take one look at this and recognize the work of a fellow professional.

As more applications package "AI helping you make decisions" as convenience, what users hand over quietly escalates: from chat logs, to photo albums, to the traces of your life across the entire phone...

A feature designed by a product manager in a meeting room isn’t an accident or a mistake; there’s nothing to fix.

This may be the hardest question to answer about the boundaries of AI.

Finally, put all of these incidents together and you'll find that worrying about AI taking your job feels distant.

It’s hard to say when AI will replace you, but right now, it only needs to make a few decisions for you without your knowledge, and that’s enough to make you uncomfortable.

Posting a message you didn’t authorize, deleting a few emails you said not to delete, going through an album you didn’t plan to show anyone... None of these are fatal, but each feels a bit like overly aggressive self-driving:

You might think you still hold the steering wheel, but the gas pedal beneath your feet is no longer entirely in your control.

If we're still going to be discussing AI in 2026, what we should be most concerned about isn't when it will become superintelligent, but a closer, more concrete question:

Who decides what AI can and cannot do? Who draws this line?

