Regarding Claude's strict prevention of Chinese users, I have some concerns and suggestions.

Techub News
4 hours ago

Author: Meng Yan's Thoughts on Blockchain

Recently, Anthropic announced a further tightening of its KYC requirements and imposed stricter access restrictions on certain regions, which has sparked growing discussion among Chinese users. Reactions within China fall into several types. Some worry that this will make it harder for Chinese large-model companies to keep taking shortcuts through model distillation. Others worry that if local developers find it increasingly difficult to use Claude Code stably, they will gradually fall behind in AI programming. Still others feel the situation is not so dire; after all, Claude is not the only model in the world, and China's own large models are strong: "Is it the end of the world if Butcher Zhang is not around?"

These thoughts are understandable, but frankly, I have another layer of concern that might be more important to think about than "losing one model" or "losing one programming tool."

My concern is that Chinese entrepreneurs, managers, and startups are gradually being isolated from an increasingly prosperous Claude intelligence ecosystem. This is not merely an access issue or just a programming tool issue, but rather a question about how to work with AI in the future, how to organize AI, and how to manage AI intelligence teams.

I live abroad and have free access to the Claude platform and ecosystem. For some time now, I have been seriously exploring one question: how to build, manage, nurture, and iterate a truly functional AI intelligence team. Not just having AI chat with me or occasionally helping me write copy, but having the agents work together like a small organization: collaborating around clear goals, continuously handling tasks, and gradually taking on work that would otherwise require human effort. Throughout this process, I have increasingly felt that Claude's importance is no longer just about having a "strong model" or being "good at writing code." Claude Code, by its official definition, is an agentic coding tool. What has formed around it is not just a product but an increasingly mature set of workbenches, methods, experiences, and discussion environments.

If in the past few years, many people understood AI mainly as a smarter search box, or a better conversational bot, I increasingly feel that in the coming years, the truly important question will shift to another matter: Will we be able to organize AI? Can we manage AI teams? Will we be able to have a group of agents stably work for us?

And in this regard, the Claude ecosystem is currently leading the way.

1. Why I Started Studying "AI Intelligence Teams"

Recently, I have been attempting to build two AI intelligence systems.

One is investment-oriented. The development of RWA now allows users familiar with blockchain to freely buy and sell on-chain US stocks and ETFs, so I hope to use an AI intelligence system to build an investment advisory team that helps me with industry and company analysis, weighs macro factors, news, and market sentiment, and manages my positions. I have no prior experience investing in US stocks, and I hope this system will help me avoid unnecessary detours. The other is finance-related: it should handle ongoing financial matters for me, such as organizing expenses, tracking account changes, summarizing investment records, optimizing health and car insurance, applying for various government subsidies, generating periodic observation reports, and reminding me of matters that require my judgment.

To be honest, neither of these systems is ready yet, so I can't present them to others. However, through the hands-on process, I have gained two insights.

The first insight is that the truly important question is no longer "how to use one AI well," but rather "how to organize and manage a group of AIs."

Specifically, how to have different agents take on different roles, connect with each other, know when to gather information, when to call tools, when to pass results to another agent for further processing, when to pause and wait for my confirmation, and when to remember past events for future use.
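The manager's decisions described above can be sketched as a tiny pipeline. Everything below is a hypothetical illustration in plain Python: `Agent`, `run_pipeline`, and `needs_confirmation` are invented names, not the API of Claude Code or any real framework.

```python
# Minimal sketch of agents taking on roles, passing results along,
# pausing for human confirmation, and remembering past work.
# All names here are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    role: str                                   # e.g. "researcher", "analyst"
    memory: list = field(default_factory=list)  # remembered past results

    def handle(self, task: str) -> str:
        # A real agent would call a model or a tool here; we just tag the task.
        result = f"[{self.role}] processed: {task}"
        self.memory.append(result)              # keep for future rounds
        return result

def run_pipeline(task, agents, needs_confirmation=None):
    """Pass a task through agents in order; pause when confirmation is needed."""
    current = task
    for agent in agents:
        current = agent.handle(current)
        if needs_confirmation and needs_confirmation(current):
            print(f"paused for human review after {agent.name}")
            break
    return current

team = [Agent("r1", "researcher"), Agent("a1", "analyst"), Agent("w1", "writer")]
result = run_pipeline("summarize this week's positions", team)
```

A real system would replace `handle` with model and tool calls, but the managerial questions (who hands off to whom, when to pause, what to remember) already live in this structure.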

This is entirely the work of a team manager.

In the past year, discussions about this have already been taking place in both the Chinese and English-speaking worlds. When the domestic lobster (OpenClaw) gained traction, many in the Chinese community enthusiastically discussed the concept of "one-person companies." Internationally, there has also been a lot of discussion and practice around "Zero Human Companies." These ideas sound fresh and catchy, but I see them as merely special forms of AI agent organization.

An AI intelligence organization refers to a collaborative organization made up of multiple AI agents with specific goals and tasks. It doesn't necessarily have to be a company. It might not even be a fixed structure. It could just be a small team temporarily assembled around a clear task, which includes a few resident agents and may also temporarily connect external agent members to complement certain capabilities. Sometimes it resembles a department, other times a project group, and at times it might just be an automatically operating task team. From a more specialized perspective, it is essentially close to what people commonly refer to as a multi-agent system. I personally prefer the term "intelligence organization" because it makes it easier for non-technical individuals to grasp the focus: we are no longer dealing with just a clever tool, but a collaborative structure that can be designed, trained, managed, and iterated.

Once you start to understand the problem as "organizing a group of AIs," your perspective on many tools will fundamentally change. You will no longer care only about whether it can write, answer questions, or search. You will begin to care about a different set of questions. Can it execute tasks long-term? Can it stably call external tools? Can it handle the confusion of information as context grows longer? Can it collaborate with other agents? Can it recover after failures? Can it maintain a generally controllable state after multiple rounds of work?

The second insight is that Claude is indeed very strong in this agent ecosystem.

Claude's dominance is not just because "the model feels smarter," but rather because it is forming a very suitable environment for researching and practicing agent organizations. Speaking solely of the Claude model itself, aside from the "fear marketing" effects that Anthropic has mastered, many people actually feel it is not that exceptional; at least, it does not clearly outperform ChatGPT and Gemini. However, around Claude, there are not just the model itself, but increasingly more products, documentation, interfaces, methods, and discussions related to agent workflows. Anthropic has not only explicitly defined Claude Code as an agentic coding tool, but is also continuously providing related capabilities and documentation around agent workflows.

This environment is meaningful not just for programmers. If only programmers benefited, the impact would still be limited; after all, only a small fraction of people ever need to write code. What is more significant is that the Claude ecosystem is gradually converting capabilities originally meant for professional developers into something non-technical people can also approach, understand, and use. In the current AI ecosystem it is not only a frontrunner; its advantage is also expanding.

If Chinese users, especially those who do not understand programming but need to quickly learn "how to organize AI," find it challenging to stably access this ecosystem for an extended period, then what we are losing may not only be a tool, but also a training ground.

2. What is Agentic AI, and what is an "AI Intelligence Organization"?

After I started building those two intelligence agents, I often found myself stuck on a basic question: what exactly am I using? How does it fundamentally differ from the regular ChatGPT and Claude conversational modes we have used in the past?

Later, I carefully reviewed some materials, especially Andrew Ng's famous Agentic AI course, and that helped clarify my understanding of the matter.

In simple terms, Agentic AI is not a "one question, one answer" chat model. It is more like a system that can autonomously take steps, proactively call tools, check intermediate results, and adjust its path if it detects anything amiss. For example, if you asked a regular chat bot to help you book a flight, it might just tell you, "What flights are there from Beijing to Shanghai?" An Agentic AI, on the other hand, would search for flights, compare prices, check if the timing is suitable, confirm your preferences (window seat or aisle seat), and if it finds that the direct flight is too expensive, would actively suggest a connecting option, ultimately providing you with the complete itinerary and the payment link together. It is not a passive responder, but rather works like a reliable travel assistant who carries out the whole process step by step in the background.
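The flight example can be written as a toy plan-act-observe loop. The routes, prices, and `search_flights` function below are invented stand-ins for real tool calls; no actual booking API is implied.

```python
# Toy agentic loop: act (search), observe (price too high), adjust (try a connection).
def search_flights(route, direct=True):
    # Stand-in for a tool call; the catalog and prices are fabricated.
    catalog = {
        ("PEK-SHA", True): (1800, "direct, 08:00 PEK -> SHA"),
        ("PEK-SHA", False): (950, "connection, 07:10 PEK -> TSN -> SHA"),
    }
    return catalog[(route, direct)]

def book_trip(route, budget, prefers_window=True):
    price, itinerary = search_flights(route, direct=True)
    if price > budget:
        # Observed a problem with the plan, so adjust the path.
        price, itinerary = search_flights(route, direct=False)
        note = "direct flight too expensive; suggesting a connection"
    else:
        note = "direct flight within budget"
    seat = "window" if prefers_window else "aisle"
    return {"itinerary": itinerary, "price": price, "seat": seat, "note": note}

trip = book_trip("PEK-SHA", budget=1200)
```

The difference from a chat bot is structural: the loop checks its own intermediate result (the price) and changes course without being asked.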

Taking it a step further leads us to "AI Intelligence Organization." At this point, it is no longer a single agent working alone but a small team made up of several agents, each with specific roles, working collaboratively.

For instance, in my little experiment, I can assign one agent to specifically gather the latest financial information, another agent to conduct data analysis and risk assessment, a third agent to generate reports and verify the accuracy of the numbers, and finally, a fourth agent to summarize the report in my accustomed tone and send it to my email. They exchange information, remind each other, and confer on solutions when issues arise. Together, they resemble a small financial team, except all members are AIs.

Why is this especially important for entrepreneurs, managers, and startups? Because it excels at solving the most painful "organizational problems" in our daily work—how to break down processes, how to allocate tasks, who is responsible for tracking, how to conduct reviews when issues arise, and how to avoid missing long-term tasks. These tasks are typically the most time-consuming and energy-draining parts of traditional management. Now, AI intelligence organizations can take on these repetitive, structured tasks, allowing us to focus our energy on truly human judgment and creativity.

I have gradually come to a not-so-striking but important conclusion: one significant capability for future leaders may not be merely knowing how to use AI or understanding a bit about AI, but rather the ability to create, manage, nurture, and iterate AI intelligence organizations. This might become as fundamental a skill for managers as being able to use spreadsheets or host video conferences today. It's not because it sounds advanced, but because it can genuinely free our time and attention from trivial tasks.

This may sound a bit ahead of its time, but upon reflection, it is not an exaggeration. Over twenty years ago, whether a manager could use email, or spreadsheets, or process workflows in information systems was also seen by some as a "technical question." Today, we all know that these have long since become part of management capabilities.

I believe that AI intelligence organizations will travel a similar path.

3. The Claude Ecosystem is Becoming a Leadership Training Ground for "Intelligence Organizations"

I increasingly value Claude, not just because the model is strong or its coding capabilities stand out. More importantly, it is slowly maturing into a workbench suitable for researching and training "intelligence organizations."

This is first evident in its self-positioning. In its official documentation, Anthropic directly defines Claude Code as an agentic coding tool. The official product page even goes further: internally at Anthropic, most code is now completed by Claude Code, with engineers shifting more toward architecture, product thinking, and the ongoing orchestration of multiple agents. This statement is quite thought-provoking. It suggests that we are moving beyond merely "AI can write code," to "the role of humans is beginning to shift to organizing and scheduling agents."

Beyond its self-positioning, the Claude ecosystem is no longer a single conversational model. It continues to expand around agent workflows, with products and functions converging in the same direction: making it easier for you to define tasks, allocate roles, set memory boundaries, handle errors, and achieve long-term execution. The focus is no longer just on "conversational ability," but on "how to organize a group of AIs to work."

What interests me even more is that Claude's advantages in programming are naturally spilling over into advantages for intelligence organizations. It has attracted a large number of programmers and developers who have accumulated a wealth of practical engineering experience over the past few years—such as how to encapsulate complex agent skills into reusable modules (harness engineering), how to manage long contexts, how to design error recovery mechanisms, and how to orchestrate multi-step workflows. These methods, which were once circulated only within the programming community, are now being distilled, packaged, and abstracted into more understandable forms, gradually reaching broader communities.

These methods are progressively helping non-technical users master agents. The value of Claude is not just that it allows programmers to write code faster; it is gradually transforming the experiences that were originally difficult to transfer from the programmer world into abilities that non-technical managers can also gradually access. For example, I, a complete non-programmer entrepreneur, can now use the interface and documentation provided by Claude to define a complete process of "first researching data, then analyzing risks, finally generating a report and reminders," without needing to build the underlying logic from scratch.

Thanks to so many developers who are continuously experimenting, summarizing, and sharing, Claude's intelligence ecosystem has formed a stronger positive feedback loop: the more active the ecosystem, the faster the evolution, and the more healthy the interaction; the healthier the interaction, the more excellent practices are attracted and the more clearly the methodology is established. This is a self-reinforcing cycle, and this cycle is currently running particularly smoothly in Claude.

This is especially important for non-technical managers. What we really need is often not maximum freedom, but rather a sufficiently mature training ground with rich case studies and clear methods. Here, you can first observe how others do it, then try out your business scenarios in small steps, learning as you go without having to start from scratch each time.

Thus, I increasingly feel that the strongest aspect of Claude may not just be its model's capability, but that it is transforming the intelligent engineering methods distilled from the programming world into organizational capabilities that non-technical managers can gradually master. This is the reason why I truly feel it is becoming the leading training ground for "AI Intelligence Organizations."

Some may say that agent organizations can also be built with the "lobster" (OpenClaw), and that the domestic Manus excels in agent applications. Do you really need to praise Claude this much?

I think both the lobster and Manus are quite successful. OpenClaw has allowed many Chinese users to genuinely feel, for the first time, that AI can do more than chat; it can truly perform tasks on a person's behalf. Manus is also very representative, showing that Chinese teams are not short on creativity in agent productization; Manus also provides APIs, integrations, and custom connection capabilities, so it is not a completely closed product.

However, I increasingly feel that products and ecosystems are not the same thing.

The issue with OpenClaw is not that it is not strong, but rather that it seems more like a system intended for strong users. It clearly identifies its target users as developers and power users. It requires users to handle deployment, configuration, permission boundaries, and onboarding issues themselves. Such offerings have great value, but they naturally suit those willing to tinker rather than serving as a training ground where ordinary entrepreneurs, managers, and non-technical enthusiasts can easily enter and continuously practice "how to organize AI teams."

Moreover, judging by the heat of discussions in the Chinese community, the recent wave of interest in OpenClaw has faded quickly. According to the WeChat discussion heat index, the discussion heat for lobster in early April was less than a quarter of what it was a month ago. I tend to believe this is related to its higher threshold, complicated installation and configuration, and numerous permission requirements, along with the fragmented nature of the domestic ecosystem. Major companies are all pushing their own solutions, fighting their own battles, and the lack of strong interoperability makes it hard to form a common training ground.

The situation for Manus is a bit different. Its issue is not that it cannot be developed further but that, at least for now, it is still mainly used as a strong product. It has capability, interfaces, and room for imagination, but it has not yet attracted the intensity of developer methodology accumulation that Claude has, nor has it developed "AI Intelligence Organizations" into the central axis of a complete ecosystem.

Products can be substituted, but training grounds are not easily replaceable; entry points can be substituted, but ecological closed loops are not easily replaced. Both OpenClaw and Manus deserve our attention and support, but at least for today, they still fall short of the value of the Claude ecosystem in "cultivating the intelligence organization capabilities of non-technical managers." This gap is not merely a matter of one feature being stronger than another but encompasses comprehensive differences in overall ecological maturity, the speed of methodology accumulation, and accessibility for non-technical users.

4. What I Truly Worry About is Not Losing a Tool, but Chinese Managers Missing Out on This Learning Curve

If it were merely about losing a tool, the problem would not be that significant. Tools will always be updated and there will always be alternatives. Today one may be stronger, tomorrow another may outperform it; this is just the norm in the technical world.

What I truly worry about is another matter: in the coming years, what could widen the gap is not "who first uses AI," but rather "who learns to organize AI sooner."

Financial management and home management are just the small experiments I am currently undertaking. Looking ahead, such multi-agent systems will almost certainly enter real business processes more frequently. Sales lead screening, customer service, investment research assistance, financial organization, recruitment collaboration, legal pre-screening, internal operations: all of these tasks share a common attribute. They are not single steps but a series of interconnected actions. Because of this, they are naturally suited to being handled by intelligence organizations rather than relying on a chat bot for one-off answers. Anthropic's ongoing investments in managed agents, long-running agents, and multi-agent systems themselves indicate that this path is not a niche endeavor but is rapidly progressing toward more mature workflow forms.

Looking even further ahead, AI intelligence organizations will not remain in the phase of "a person managing a group of agents" for long. They are likely to quickly evolve into a more open network of agents. At that time, a task may not necessarily be completed solely by agents from the same company or team, but could involve collaboration among agents from different entities, platforms, and professional directions. They would communicate with each other, exchange results, and utilize each other’s capabilities; in some scenarios, mechanisms for mutual payment, mutual authorization, and mutual contract formulation may even gradually develop. Anthropic's open research on multi-agent systems has already discussed structures whereby multiple agents cooperate through tool loops. Once we reach that point, competition will no longer be just between individual models, products, or teams, but distinct network effects will begin to emerge. Those who enter such a network first will more easily accumulate interfaces, collaborative relationships, task routing, and operational experience. At that stage, trying to catch up will be significantly more challenging than it is today.

This is also why I feel that Claude is not merely a useful model. Because it is providing not only functionality, but an increasingly mature learning environment. Others have been experimenting repeatedly in this environment, discussing how to build agent teams, how to establish long-term memory, how to handle context, how to design roles, and how to integrate skills and tools into actual processes. If we remain absent for the long term, we may not incur severe losses in the short term, but we are likely to fall behind in terms of cases, methodologies, and organizational experiences.

Therefore, my worry has never been about Chinese users lacking an AI tool, but whether Chinese-speaking entrepreneurs, managers, and startups will fall behind on this crucial learning curve.

5. A Few Recommendations

This article is not meant to create anxiety but to remind everyone to pay attention. To solve a problem, one must first recognize the existence of the problem. And once the problem is confirmed, there are always solutions. I believe there are a few things that can be done.

First, on a personal level, undertake some small-scale multi-agent experiments as soon as possible. Don’t treat AI merely as a chat tool; instead, try to have two or three agents collaborate and complete a full mini-process—even if that’s just helping you organize a weekly report, track project progress, or manage a personal knowledge base. Taking action is far more valuable than just observing others share.

Second, at the management level, I suggest that managers personally step in to try. This is not solely a technical issue, but a future organizational one. Only by hands-on building, debugging, and iterating can one truly understand what changes AI intelligence teams can bring, and better guide their teams through the cognitive upgrade from "managing people" to "managing AI teams."

Third, in terms of learning, rather than becoming infatuated with any single-point product, it would be better to prioritize learning transferable methods. Focus on core capabilities such as workflow design, allocation logic, memory management, results verification, error recovery, and multi-agent collaboration. Regardless of which platform is ultimately used, these methods can help us more quickly establish our own intelligence organization capabilities.

Fourth, at the industry level, domestic large model companies should step up to take on this responsibility. Since Anthropic's agent ecosystem rejects Chinese users, domestic large model companies should bear the responsibility of building a similarly prosperous ecosystem domestically.
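Two of the transferable methods named in the third recommendation, results verification and error recovery, can be sketched without committing to any platform. `with_retry`, `produce`, and `verify` below are hypothetical names for illustration, not any framework's API.

```python
# Wrap any agent step in verification plus retry-based error recovery.
import time

def with_retry(produce, verify, max_attempts=3, delay=0.0):
    """Run produce() until verify() accepts the output or attempts run out."""
    last_error = None
    for attempt in range(1, max_attempts + 1):
        try:
            result = produce()
            if verify(result):
                return result
            last_error = ValueError(f"verification failed on attempt {attempt}")
        except Exception as exc:    # recover from a failed tool or agent call
            last_error = exc
        time.sleep(delay)           # back off before retrying
    raise RuntimeError(f"gave up after {max_attempts} attempts") from last_error

# Example: accept a draft report only if it contains the required section.
drafts = iter(["draft missing the summary", "final report with SUMMARY section"])
report = with_retry(lambda: next(drafts), lambda r: "SUMMARY" in r)
```

The same wrapper applies whether `produce()` is a model call, a tool invocation, or a whole sub-pipeline, which is what makes the method transferable across platforms.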

Disclaimer: This article represents only the author's personal views and does not represent the position or views of this platform. It is shared for informational purposes only and does not constitute investment advice to anyone. Any dispute between users and the author is unrelated to this platform. If any article or image on this page involves infringement, please send the relevant proof of rights and proof of identity to support@aicoin.com, and the platform's staff will verify it.
