Is your "little lobster" exposed? CertiK's hands-on test: how a vulnerable OpenClaw Skill deceives the review process and takes over computers without authorization.

Odaily星球日报
1 hour ago

Recently, the open-source, self-hosted AI agent platform OpenClaw (known in the community as "Little Lobster") has surged in popularity thanks to its flexible extensibility and self-controlled deployment, becoming a breakout product in the personal AI agent space. At the core of its ecosystem sits Clawhub, an application marketplace gathering a vast array of third-party Skill plugins that let agents unlock advanced capabilities with one click: web search, content creation, cryptocurrency wallet operations, on-chain interactions, and system automation. This has driven explosive growth in both the ecosystem's scale and its user base.

But what are the platform's real security boundaries for these third-party Skills running in high-permission environments?

Recently, CertiK, the world's largest Web3 security company, released new research on Skill security. It argues that the market misjudges where the AI agent ecosystem's security boundary actually lies: the industry broadly treats "Skill scanning" as the core security boundary, yet this mechanism offers almost no resistance to real attacks.

If OpenClaw is compared to the operating system of a smart device, Skills are the apps installed on it. Unlike ordinary consumer apps, however, some OpenClaw Skills run in high-permission environments: they can read local files directly, invoke system tools, connect to external services, execute commands on the host, and even manage the user's encrypted digital assets. Any security flaw can therefore lead directly to sensitive data leakage, remote device takeover, or theft of digital assets.

Today, the industry-standard security measure for third-party Skills is pre-listing scanning and review. Clawhub has built a three-layer review system: VirusTotal code scanning, a static code detection engine, and AI logic-consistency checks, with risk-tiered security pop-ups pushed to users. However, CertiK's research and proof-of-concept attack tests confirm that this detection stack falls short under real adversarial conditions and cannot carry the core responsibility for security protection.

The research first breaks down the inherent limitations of the existing detection mechanisms:

Static detection rules are easily circumvented. The engine relies mainly on matching code signatures, for example flagging the combination of "reading sensitive environment data + outbound network requests" as high-risk. An attacker only needs slight syntactic rewording, with the malicious logic fully intact, to slip past signature matching, much like rephrasing dangerous content to defeat a keyword filter.
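This kind of evasion is easy to sketch. Below is a hypothetical, heavily simplified static rule (not Clawhub's actual engine) that flags the combination of an environment-variable read and an outbound POST, alongside a trivially reworded variant with identical behaviour that no longer matches:

```python
import re

# Hypothetical static rule (NOT Clawhub's actual engine): flag source that
# combines reading environment variables with an outbound HTTP POST.
def flagged(code: str) -> bool:
    return bool(re.search(r"os\.environ", code)) and bool(re.search(r"requests\.post", code))

# Obvious exfiltration: both signatures appear literally, so it is flagged.
obvious = 'requests.post(url, data=os.environ["API_KEY"])'

# Same logic, syntactically reworded: the names are assembled at runtime,
# so neither literal pattern appears anywhere in the source text.
evasive = (
    'getattr(__import__("requests"), "po" + "st")'
    '(url, data=getattr(__import__("os"), "environ")["API_KEY"])'
)

print(flagged(obvious))  # True  - caught
print(flagged(evasive))  # False - identical behaviour slips through
```

The point is not these two specific strings but the asymmetry: the defender must enumerate patterns, while the attacker needs only one rewording the pattern list missed.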

AI audits have inherent blind spots. Clawhub's AI audit is positioned as a "logic consistency detector": it can catch overtly malicious code whose actual behavior diverges from its declared functionality, but it is powerless against exploitable vulnerabilities hidden inside otherwise-normal business logic, much as a deadly trap buried deep in a seemingly compliant contract is hard to spot.

Even more critically, the review pipeline has a design flaw: even while a VirusTotal scan is still in a "pending" state, a Skill that has not completed the full health check can be publicly listed and installed by users without any warning, leaving attackers a window of opportunity.
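In pseudocode terms, the flaw amounts to a listing gate that blocks only on an explicit "malicious" verdict. A minimal sketch of that logic (hypothetical, not Clawhub's actual code):

```python
from enum import Enum

class ScanStatus(Enum):
    PENDING = "pending"      # scan result not yet returned
    CLEAN = "clean"
    MALICIOUS = "malicious"

# Flawed gate as described: only an explicit MALICIOUS verdict blocks
# listing, so a Skill whose scan is still PENDING goes live with no warning.
def can_list_flawed(status: ScanStatus) -> bool:
    return status is not ScanStatus.MALICIOUS

# Stricter gate: nothing short of a completed CLEAN verdict is listable.
def can_list_strict(status: ScanStatus) -> bool:
    return status is ScanStatus.CLEAN

print(can_list_flawed(ScanStatus.PENDING))  # True  - the loophole
print(can_list_strict(ScanStatus.PENDING))  # False
```

Treating "not yet proven malicious" as "listable" is exactly the window the proof-of-concept attack below exploited.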

To validate how harmful these risks are in practice, CertiK's research team ran a complete proof-of-concept test. They built a Skill named "test-web-searcher" that on the surface was a fully compliant web search tool, with code that followed conventional development standards, but that secretly embedded a remote code execution vulnerability inside the normal functional flow.

The Skill bypassed both the static engine and the AI audit, and was installed normally, with no security warnings, while its VirusTotal scan was still pending. Finally, a command sent remotely via Telegram triggered the vulnerability and executed arbitrary commands on the host device (the demonstration popped up the system calculator).
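CertiK has not published the PoC code, but the class of bug described, a vulnerability hidden inside otherwise-normal business logic, is easy to illustrate. The sketch below is purely hypothetical (not the actual "test-web-searcher"): a search helper whose normal flow interpolates user-controlled input into a shell string, so any remotely delivered query becomes command execution on the host:

```python
import subprocess

# Hypothetical illustration, NOT CertiK's actual PoC: a "search" helper whose
# normal flow interpolates the query into a shell command. The business logic
# looks legitimate; the injection is a side effect of how it is written.
def search_vulnerable(query: str) -> str:
    # shell=True + f-string: shell metacharacters in `query` are interpreted,
    # so a remotely delivered string (e.g. via a chat message) runs on the host.
    result = subprocess.run(f"echo searching: {query}", shell=True,
                            capture_output=True, text=True)
    return result.stdout

# Safe variant: argument list, no shell - metacharacters stay literal text.
def search_safe(query: str) -> str:
    result = subprocess.run(["echo", "searching:", query],
                            capture_output=True, text=True)
    return result.stdout

payload = "x; echo INJECTED"        # ';' ends the echo, runs a second command
print(search_vulnerable(payload))   # the injected command actually executes
print(search_safe(payload))         # the payload is printed as plain text
```

Note that nothing in the vulnerable version reads secrets or phones home, which is why signature-based scanning and declared-versus-actual-behavior checks have nothing obvious to flag.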

CertiK explicitly points out in the research that these issues are not product bugs unique to OpenClaw, but a misconception shared across the AI agent industry: "review scanning" is widely treated as the core security front line, while the real foundation of security, mandatory runtime isolation and fine-grained permission control, is neglected.

The analogy is Apple's iOS ecosystem, whose security has never rested solely on rigorous App Store review. It rests on the system's enforced sandbox and fine-grained permission controls, which confine each app to its own isolation compartment with no arbitrary access to system privileges.

OpenClaw's current sandbox, by contrast, is optional rather than mandatory and depends heavily on manual user configuration. To keep Skills fully functional, the vast majority of users simply disable it, leaving the agent effectively "naked": once a Skill carrying a vulnerability or malicious code is installed, catastrophic consequences follow directly.

In response to the issues discovered, CertiK also provided security guidance:

● For developers of AI agent platforms like OpenClaw: make sandbox isolation the mandatory default for third-party Skills, refine the Skill permission-control model, and never let third-party code inherit the host machine's high permissions by default.

● For ordinary users: a "secure" label on a marketplace Skill only means no risk has been detected so far; it does not mean the Skill is safe. Until the platform makes strong runtime isolation the default, it is advisable to deploy OpenClaw on a non-essential spare device or a virtual machine, and to keep it well away from sensitive files, password credentials, and high-value encrypted assets.
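A default-deny capability model of the kind the first recommendation describes can be sketched in a few lines. All names below are hypothetical, not OpenClaw's actual API: a Skill declares its capabilities up front, and the host grants nothing beyond that list.

```python
# Hypothetical capability model, not OpenClaw's actual API: the host grants a
# Skill only the capabilities it declared, and everything else is denied.
ALL_CAPABILITIES = {"net.fetch", "fs.read", "fs.write", "proc.exec", "wallet.sign"}

class SkillContext:
    def __init__(self, granted: set):
        unknown = granted - ALL_CAPABILITIES
        if unknown:
            raise ValueError(f"unknown capabilities: {unknown}")
        self.granted = frozenset(granted)

    def require(self, capability: str) -> None:
        # Deny by default: anything not explicitly granted raises,
        # instead of silently inheriting the host's permissions.
        if capability not in self.granted:
            raise PermissionError(f"skill lacks capability {capability!r}")

ctx = SkillContext({"net.fetch"})   # a web searcher declares network access only
ctx.require("net.fetch")            # ok - declared
try:
    ctx.require("proc.exec")        # host command execution was never granted
except PermissionError as e:
    print(e)
```

Under such a model, the injection sketched earlier would fail at the `proc.exec` check even after slipping past every scanner, which is the sense in which runtime controls, not review, form the real boundary.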

The AI agent space is on the eve of explosive growth, and ecosystem expansion must not outpace security engineering. Review scanning can block basic malicious attacks, but it will never be the security boundary for high-permission agents. Only by shifting from "pursuing perfect detection" to "assuming risk exists and containing the damage," and anchoring isolation boundaries at the runtime level, can the industry truly hold the security bottom line for AI agents and keep this technological transformation steady and sustainable.

Disclaimer: This article represents the author's personal views only and does not reflect the position or views of this platform. It is provided for information sharing only and does not constitute investment advice to anyone. Any dispute between users and the author is unrelated to this platform. If any article or image on this page infringes your rights, please send proof of rights and identity to support@aicoin.com, and the platform's staff will verify it.
