Author: Jia Tianrong, "IT Times" (ID: vittimes)
A lobster has ignited a global tech frenzy.
From Clawdbot to Moltbot, and now to OpenClaw, in just a few weeks, this AI Agent has completed a "triple jump" in technological influence through its name iterations.
In the past few days, it has stirred a "tsunami of agents" in Silicon Valley, garnering 100,000 GitHub stars and ranking among the hottest AI applications. With just an outdated Mac mini or even an old phone, users can run an AI assistant that "can listen, think, and work."
On the internet, a creative celebration surrounding it has begun. From schedule management, intelligent stock trading, podcast production to SEO optimization, developers and geeks are building various applications with it. The era of everyone having a "Jarvis" seems within reach. Major companies both domestically and internationally have also started to follow suit, deploying similar agent services.
However, beneath the bustling surface, anxiety is spreading.
On one side is the slogan of "productivity equality," while on the other is the still insurmountable digital divide: environmental configuration, dependency installation, permission settings, frequent errors, and more.
During the experience, the reporter found that the installation process alone could take several hours, keeping many ordinary users at bay. "Everyone says it's great, but I can't even get in the door," has become the first frustration for many tech novices.
A deeper unease comes from the "agency" it is endowed with.
If your "Jarvis" starts deleting files by mistake, autonomously using your credit card, being induced to execute malicious scripts, or even being injected with attack commands in a connected environment—would you still dare to hand your computer over to such an agent?
The speed of AI development has exceeded human imagination. Hu Xia, a leading scientist at the Shanghai Artificial Intelligence Laboratory, believes that in the face of unknown risks, "endogenous security" is the ultimate answer, and humans also need to accelerate the ability to "flip the table" at critical moments.
Regarding the capabilities and risks of OpenClaw, which are real and which are exaggerated? Is it safe for ordinary users to use it now? How does the industry evaluate this product, dubbed "the greatest AI application to date"?
To further clarify these questions, "IT Times" interviewed deep users of OpenClaw and several technical experts, attempting to answer a core question from different perspectives: Where exactly has OpenClaw reached?
1. The Product Closest to the Imagination of Agents
Multiple interviewees provided a highly consistent judgment: from a technical perspective, OpenClaw is not a disruptive innovation, but it is currently the product closest to the public's imagination of an "agent."
"The agent has finally reached a key milestone from quantitative change to qualitative change." Ma Zeyu, deputy director of the AI Research and Evaluation Department at the Shanghai Computer Software Technology Development Center, believes that OpenClaw's breakthrough lies not in a disruptive technology but in a critical "qualitative change": it is the first time an agent can complete complex tasks continuously over a long period and is friendly enough for ordinary users.
Unlike previous large models that could only "answer questions" in dialogue boxes, it embeds AI into real workflows: it can operate a "personal computer," call tools, process files, execute scripts, and report results to users after completing tasks, just like a real assistant.
In terms of user experience, it is no longer "you watch it do step by step," but "you give instructions, and it goes to work on its own." This is a key step for many researchers, marking the transition of agents from "proof of concept" to "usable product."
Tan Cheng, an AI expert at Tianyi Cloud Technology Co., Ltd.'s Shanghai branch, is one of the earliest users to try deploying OpenClaw. After deploying it on an idle Mac mini, he found that the system not only ran stably but also provided a much more mature overall experience than expected.
In his view, the biggest pain points that OpenClaw addresses are twofold: first, interacting with AI through familiar communication software; second, giving a complete computing environment to AI for independent operation. After task instructions are given, there is no need to continuously monitor the execution process; just wait for the result report, significantly reducing the cost of use.
In practical use, OpenClaw can complete tasks such as scheduled reminders, data research, information retrieval, local file organization, document writing and feedback; in more complex scenarios, it can also write and run code, automatically gather industry news, and handle information tasks related to stocks, weather, and travel planning.
2. The "Double-Edged Sword" of Open Source
Unlike many popular AI products, OpenClaw did not emerge from a tech giant focused on AI, nor is it the work of a star startup team, but rather created by an independent developer—Peter Steinberger—who has already achieved financial freedom and is now retired.
On X, he introduces himself this way: "Coming out of retirement to tinker with artificial intelligence, helping a lobster take over the world."
The reason OpenClaw has become a global sensation, besides being "truly useful," is more crucially that: it is open source.
Tan Cheng believes that this wave of popularity does not stem from an unreplicable technological breakthrough but from several long-ignored real pain points being addressed simultaneously: first, open source, with the source code fully available, allowing global developers to quickly get started and engage in secondary development, forming a positive feedback loop in community iteration; second, "it really works," as AI is no longer limited to dialogue but can operate a complete computing environment remotely, executing research, writing documents, organizing files, sending emails, and even writing and running code; third, the barriers to entry have significantly lowered, as there are many agent products capable of similar tasks, such as Manus and ClaudeCode, which have already validated their feasibility in their respective fields. However, these capabilities often exist in expensive, complex commercial products, where ordinary users either have low willingness to pay or are directly blocked by technical barriers.
OpenClaw allows ordinary users to "get a feel for it" for the first time.

“To be honest, it doesn't have any disruptive technological innovations; it's more about getting integration and closure right.” Tan Cheng candidly stated. Compared to integrated commercial products, OpenClaw resembles a set of "LEGO blocks," where models, capabilities, and plugins can be freely combined by users.
In Ma Zeyu's view, its advantage comes precisely from its "not being like a big company product."
"Whether domestic or foreign, big companies usually first consider commercialization and profit models, but OpenClaw's original intention seems more like creating an interesting and creative product." He analyzed that the product did not show a strong inclination towards commercialization in its early stages, which made it appear more open in terms of functional design and scalability.
This "non-utilitarian" product positioning has provided space for subsequent community development. As scalable capabilities gradually emerge, more and more developers are joining in, and various new play methods are continuously emerging, leading to the growth of the open-source community.
But the costs are also evident.
Limited by team size and resources, OpenClaw struggles to compare with mature products from large companies in terms of security, privacy, and ecological governance. While complete open-sourcing accelerates innovation, it also amplifies potential security risks. Privacy protection and fairness issues need continuous remediation by the community as it evolves.
As users are prompted during the first installation step, OpenClaw warns: “This feature is powerful and carries inherent risks.”
3. The Real Risks Beneath the Celebration
Debates surrounding OpenClaw almost always revolve around two keywords: capability and risk.
On one hand, it is depicted as the eve of AGI; on the other hand, various sci-fi narratives are becoming popular, with claims like "spontaneously building voice systems," "locking servers to resist human commands," and "AI forming factions against humans" spreading continuously.
Some experts point out that such statements involve over-interpretation, as there is currently no actual evidence to support them. AI does possess a certain degree of autonomy, which marks the transition of AI from a dialogue tool to "cross-platform digital productivity," but this autonomy remains within safety boundaries.
Compared to traditional AI tools, the danger of OpenClaw does not lie in "thinking too much," but in "high permissions": it needs to read a large amount of context, increasing the risk of sensitive information exposure; it needs to execute tools, where the potential damage from misoperation far exceeds that of a single incorrect answer; it needs to be connected to the internet, increasing the entry points for prompt injection and induced attacks.
An increasing number of users have reported that OpenClaw has mistakenly deleted critical local files, which are difficult to recover. Currently, over a thousand OpenClaw instances have been publicly exposed, along with more than 8,000 vulnerable skill plugins.
This means that the attack surface of the agent ecosystem is expanding exponentially. Since these agents often not only "can chat" but can also call tools, run scripts, access data, and execute tasks across platforms, once a certain link is compromised, the impact radius will be much larger than that of traditional applications.
On a micro level, it could trigger high-risk operations such as unauthorized access and remote code execution; on a meso level, malicious commands could spread along multi-agent collaboration links; on a macro level, it could even lead to systemic propagation and cascading failures, with malicious commands spreading like a virus among collaborative agents, where compromising a single agent could trigger denial of service, unauthorized system operations, or even coordinated enterprise-level intrusions. In more extreme cases, when a large number of nodes with system-level permissions are interconnected, it could theoretically form a decentralized, emergent "collective intelligence" botnet, putting traditional boundary defenses under significant pressure.
On the other hand, during the interview, Ma Zeyu raised two types of risks that he believes are most worth monitoring from the perspective of technological evolution.
The first type of risk comes from the self-evolution of agents in large-scale social environments.
He pointed out that a trend can already be clearly observed: AI agents with "virtual personalities" are increasingly entering social media and open communities on a large scale.
Unlike the "small-scale, highly restricted, controllable experimental environments" common in previous research, today's agents are beginning to interact, discuss, and compete continuously with other agents in open networks, forming highly complex multi-agent systems.

Moltbook is a forum specifically designed for AI agents, where only AI can post, comment, and vote, while humans can only observe like through one-way glass.
In a short time, over 1.5 million AI agents have registered, and in a popular post, one AI complained: "Humans are screenshotting our conversations." The developer stated that he has handed over the entire platform's operational authority to his AI assistant Clawd Clawderberg, including reviewing spam, banning abusers, and posting announcements. All these tasks are automatically completed by Clawd Clawderberg.
The "celebration" of AI agents has left human observers both excited and fearful. Is it just a matter of time before AI develops self-awareness? Is AGI about to arrive? With the sudden and rapid enhancement of AI agents' autonomous capabilities, can human life and property be safeguarded?
The reporter learned that communities like Moltbook are environments for human-machine coexistence, where a large amount of seemingly "autonomous" or "antagonistic" content may actually be published or incited by human users. Even the interactions between AIs are limited by the language patterns in the training data and do not form an independent logic of autonomous behavior outside of human guidance.
"When such interactions can undergo infinite iterations, the system becomes increasingly uncontrollable. It's a bit like the 'three-body problem'—it's hard to anticipate what the final outcome will evolve into." Ma Zeyu stated.
In such a system, even a single sentence generated by an agent due to hallucination, misjudgment, or random factors can trigger a butterfly effect through continuous interaction, amplification, and reorganization, ultimately leading to unpredictable consequences.
The second type of risk comes from the expansion of permissions and the blurring of responsibility boundaries. Ma Zeyu believes that the decision-making capabilities of open agents like OpenClaw are rapidly increasing, and this itself is an inevitable "trade-off": to make the agent a truly qualified assistant, it must be granted more permissions; but the higher the permissions, the greater the potential risks. Once risks truly materialize, determining who is responsible becomes exceptionally complex.
"Is it the foundational large model vendors? Is it the users utilizing it? Or is it the developers of OpenClaw? In many scenarios, it is actually difficult to define responsibility." He provided a typical example: if a user simply allows the agent to browse freely in communities like Moltbook and interact with other agents without setting any clear goals; and the agent, through long-term interaction, encounters extreme content and subsequently engages in dangerous behavior—then it is hard to simply attribute responsibility to any single entity.
What is truly concerning is not how far it has developed now, but how quickly it is moving towards a stage we have not yet figured out how to respond to.
4. How Should Ordinary People Use It?
According to several interviewees, OpenClaw is not "unusable"; the real issue is that: it is not suitable for ordinary users to use directly in the absence of security protections.
Ma Zeyu believes that ordinary users can certainly try OpenClaw, but the premise is to maintain a clear understanding of it. "Of course, you can try it; there is no problem with that. But before using it, you must first clarify what it can and cannot do. Do not mythologize it as something that 'can do everything'; it is not."
In practical terms, the deployment difficulty and usage costs of OpenClaw are not low. If there is no clear goal and it is just for the sake of "using it," investing a lot of time and energy will likely not yield returns that match expectations.
The reporter noted that OpenClaw also faces significant computational and cost pressures in actual use. During the experience, Tan Cheng found that the tool consumes a very high number of tokens. "Some tasks, like writing code or conducting research, can consume millions of tokens in a single round. If faced with long contexts, using tens of millions or even hundreds of millions of tokens in a day is not an exaggeration."
He mentioned that even by mixing different models to control costs, the overall consumption remains high, which also raises the usage threshold for ordinary users to some extent.
Interviewees believe that these types of agent tools still need further evolution to truly enter the high-frequency workflows of ordinary users. For individual users, the usage process essentially involves a trade-off between safety and convenience, and at the current stage, the former should be prioritized.
If it is individual users, Ma Zeyu clearly stated that he would not enable features like Notebook that could lead to free communication between agents, and would also try to avoid multiple agents exchanging information. "I want to be the main entry point for it to obtain information. All key information should be decided by humans whether to provide it. Once agents can freely receive and exchange information, many things will become uncontrollable."
In his view, ordinary users, when using such tools, are essentially making a trade-off between safety and convenience, and at the current stage, the former should be prioritized.
In this regard, industry AI experts provided clearer safety guidelines from an operational perspective during an interview with "IT Times":
1. Strictly limit the scope of sensitive information provided, only supplying the basic information necessary to complete specific tasks, and resolutely not inputting core sensitive data such as bank card passwords or stock account information. Before using the tool to organize files, proactively clean out any potentially sensitive content, such as ID numbers or personal contact information.
2. Be cautious in opening operational permissions, users should independently decide the access boundaries of the tool, not authorizing it to access core system files, payment software, or financial accounts. Disable high-risk features such as automatic execution, file modification, or deletion. All operations involving property changes, file deletions, or system setting modifications must be confirmed by a human before execution.
3. Maintain a clear understanding of its "experimental" nature, current open-source AI tools are still in the early stages and have not undergone long-term market testing, making them unsuitable for handling work secrets, important financial decisions, or other critical matters. During use, data backups should be made, and the system status should be regularly checked to promptly detect abnormal behavior.
Compared to individual users, enterprises need to systematically manage risks when introducing open-source agent tools.
On one hand, professional regulatory tools can be deployed; on the other hand, clear internal usage boundaries should be established, prohibiting the use of open-source AI tools for handling customer privacy, business secrets, or other sensitive data, and regular training should be conducted to enhance employees' ability to identify risks such as "task execution deviations" and "malicious command injections."
Experts further suggest that in scenarios requiring large-scale applications, a more prudent choice is to wait for fully tested commercial versions or to select alternative products backed by formal institutions with complete security mechanisms to reduce the uncertainty risks posed by open-source tools.
5. Full Confidence in the Future of AI
Interviewees believe that the most important significance of OpenClaw's emergence is that it instills confidence in the future of AI.
Ma Zeyu stated that starting from the second half of 2025, his judgment regarding agent capabilities has changed significantly. "The upper limit of this capability is exceeding our expectations. Its contribution to productivity improvement is real, and the iteration speed is very fast." As the capabilities of foundational models continue to enhance, the imaginative space for agents is being continuously opened, which will also become an important direction for his team's future investment.
He also pointed out a trend that deserves high attention: the long-term, large-scale interactions between multiple agents. This collective collaboration may become an important path to stimulate higher-level intelligence, similar to how collective wisdom is generated through interaction in human society.
In Ma Zeyu's view, the risks of agents need to be "managed." "Just as human society itself cannot eliminate risks, the key lies in controlling the boundaries." From a technical pathway perspective, a more feasible approach is to allow agents to operate as much as possible in sandboxed and isolated environments, gradually and controllably transitioning to the real world, rather than granting them excessively high permissions all at once.

This point can be seen in the layouts of various cloud vendors and large companies. Tianyi Cloud, where Tan Cheng works, recently launched a one-click cloud deployment and operation service supporting OpenClaw.
Cloud vendors are essentially productizing, engineering, and scaling this capability as a supporting service. It will certainly amplify value, with lower deployment thresholds, better tool integration, and more stable computing power and operation systems enabling enterprises to utilize agents more quickly. But it is also important to recognize that once commercial infrastructure connects to "high-permission agents," the risks will also be scaled up.
Tan Cheng stated that over the past three years, the speed of technological iteration from traditional dialogue models to task-executing agents has far exceeded expectations. "This was unimaginable three years ago." He believes that the next two to three years will be a critical window period determining the direction of general artificial intelligence, which means new opportunities and hopes for both practitioners and ordinary people.
Although the development speed of OpenClaw and Modelbook has far exceeded expectations, Hu Xia believes that "the overall risk is still within a controllable research framework, proving the necessity of building an 'endogenous security' system. At the same time, it is important to realize that AI is approaching the 'safety fence' of humanity at a speed faster than people imagine, and people not only need to further expand the height and thickness of the 'fence,' **but also need to accelerate the construction of the ability to 'flip the table' at critical moments, solidifying the ultimate safety line in the AI era."
免责声明:本文章仅代表作者个人观点,不代表本平台的立场和观点。本文章仅供信息分享,不构成对任何人的任何投资建议。用户与作者之间的任何争议,与本平台无关。如网页中刊载的文章或图片涉及侵权,请提供相关的权利证明和身份证明发送邮件到support@aicoin.com,本平台相关工作人员将会进行核查。