
Claude transfer station business: The stricter the blockade, the more complete the gray market.

深潮TechFlow
19 hours ago
The real risk lies not in geopolitics, but in how this supply chain draws ordinary people—many of whom are already at a disadvantage—into the criminal market.

Author: Zilan Qian

Translated by: ShenChao TechFlow

ShenChao Intro: The White House claims that Chinese laboratories are stealing U.S. AI models using "tens of thousands of proxy accounts," but they misread the truth—this is not the sophisticated action of a few laboratories, but a gray market openly operating on GitHub, Taobao, Twitter, and Telegram. Any Chinese person wanting to use advanced AI tools, from professors to programmers to enthusiasts, is making use of API proxies, with prices as low as 10% of the official rate. This reveals a blind spot in the U.S. AI security framework: every layer of blockade gives rise to corresponding circumvention infrastructure, and the real risk lies not in geopolitics, but in how this supply chain pulls ordinary people—many of whom are already disadvantaged—into the criminal market.

On April 23, 2026, the White House issued a memo warning that Chinese entities are launching "industrial-scale" distillation attacks on U.S. cutting-edge AI models, evading detection using "tens of thousands of proxy accounts." In February 2026, Anthropic also reported that Chinese laboratories are conducting coordinated distillation attacks using "over 20,000 fraudulent accounts managed through a single proxy network." Both documents view "proxies"—the intermediaries between model users and providers—as a systemic means designed by a few cutting-edge Chinese laboratories to extract U.S. AI models.

Whether Chinese laboratories rely on distillation to "catch up" or not, both documents misinterpret the proxy economy they describe. Beneath a few laboratories exists a much larger market that has been openly operating on GitHub, Taobao, Twitter, and Telegram. This is a gray economy of API proxies (often referred to as "hubs") that allows Chinese developers to access Anthropic's models at costs as low as 10% of the official prices. Participants are far from just a few seasoned AI researchers; their motivations are far broader than merely building a cutting-edge model to catch up. Anyone wanting to use more advanced AI models or tools, whether they are university professors and students, tech workers, independent developers, or enthusiasts, is utilizing API proxies. The logs they generate may have already become a commodity, traded for various purposes ranging from model training to targeted fraud.

Meanwhile, every layer of control added by leading U.S. AI companies (geoblocking, phone verification, credit card requirements, and now real-time biometric KYC checks) generates corresponding circumvention infrastructure. The impact of these new SMS farms and biometric collection operations extends beyond geopolitics, influencing the design of the cutting-edge AI security framework.

Building on my 2025 article about accessing banned U.S. models in China, this update focuses specifically on the hub economy: how it is constructed, how it is monetized, and what it reveals about the limitations of access blockades and account monitoring as tools of AI governance. However, unlike the gray market of 2025, the story of 2026 does not stop at the borders between Chinese users and U.S. AI model providers. The hub economy exposes blind spots in the AI security framework, which is meant to prevent harms that transcend U.S.-China competition, from the abuse of malicious actors to the erosion of provider traceability, while fueling a criminal market that exploits ordinary people—many of whom are already disadvantaged—within the supply chain.

To illustrate how hubs operate, let's take Anthropic as an example, a company with the strictest geoblocking mechanisms and whose models are very popular among Chinese developers.


Image: A meme circulating on the Chinese internet: "Do you think you're smarter than Claude?"

Geoblocking and Identity Verification (KYC)

On the map of countries supported by Anthropic, China is noticeably absent, and on the Chinese internet, Anthropic is likewise nowhere to be found, at least technically. In reality, neither Anthropic's blockades nor the Great Firewall prevent Chinese users from accessing Claude and Claude Code. Since 2025, despite platform and government censorship, Claude has thrived on e-commerce apps like Taobao, while Singapore, with a population smaller than New York City's, "surprisingly" led the world in per capita usage of Anthropic's Claude in April 2026.


Image: Chinese developers joke on Twitter about Singapore being the highest consumer of Claude tokens, implying that Chinese people are routing traffic through Singapore to use the model. "We sometimes feel like Singaporeans." "I allocate my nationality every day." "Is it because we all use Singapore nodes?" "Seems like many companies use Singapore nodes."

The Chinese government is currently not particularly aggressive in restricting Chinese developers' access to advanced U.S. models. On the other hand, Anthropic takes this seriously and has implemented multiple mechanisms to block users from mainland China. At the most basic level, account registration requires a phone number, a foreign credit card, and a matching billing address. On September 5, 2025, Anthropic further prohibited access from entities where more than 50% of the equity is directly or indirectly owned by companies based in unsupported regions such as China, regardless of where that entity operates. This closed a loophole that previously allowed foreign companies with Chinese backgrounds to retain API access through their subsidiaries.

The latest measures emerged in April 2026. Anthropic began requiring specific users to verify their identity using government-issued photo ID and a real-time selfie, making Claude the first major consumer AI platform to implement this level of identity checks. The rollout is selective, triggered by specific use cases or integrity flags on platforms. For Chinese users accessing Claude via VPN or other intermediaries, the new KYC policy theoretically makes it more difficult to access Claude—even if Chinese users can forge phone numbers and addresses, they would theoretically find it hard to forge real-time selfies that match physical government documents.

However, in reality, not only can Chinese users access Claude and associated tools, but most of the time they can purchase tokens at 10% of the official price. The magic lies in the "hubs."

What is a "Hub"?

A hub is what the Chinese developer ecosystem calls an API proxy: an overseas server that sits between developers and Anthropic's infrastructure. It receives API requests, forwards them so they appear to originate from the hub's location, and relays the responses back. Users point their software at the proxy's server instead of Anthropic's and pay in RMB via WeChat or Alipay. This bypasses both the VPN and the foreign credit card required for direct access. Well-known hubs are cataloged in community repositories and ranked by real-time price and uptime; below them, smaller individual projects come and go.
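The request path described above can be sketched in a few lines. This is a toy illustration, not any real operator's code: the account pool, credential names, and IP address are all hypothetical, and the upstream provider is replaced with a stub function.

```python
# Toy sketch of a hub's request path. ACCOUNT_POOL, the key names, and the
# IP address are hypothetical; fake_upstream stands in for the real provider.
import itertools

# The hub holds a pool of upstream credentials and rotates through them.
ACCOUNT_POOL = itertools.cycle(["sk-pool-account-1", "sk-pool-account-2"])

def forward(user_request: dict, upstream_call) -> dict:
    """Rewrite a user's request so it appears to come from a pool account,
    send it upstream, and return the response to the user."""
    outbound = dict(user_request)
    # Strip the user's own identity and substitute a pooled credential.
    outbound["api_key"] = next(ACCOUNT_POOL)
    # The upstream provider sees the hub's server address, not the user's.
    outbound["origin_ip"] = "203.0.113.7"  # hub's overseas IP (example value)
    return upstream_call(outbound)

def fake_upstream(req: dict) -> dict:
    # Stub standing in for the provider endpoint: echoes what it "saw".
    return {"model": req.get("model"), "served_to": req["api_key"]}

resp = forward(
    {"model": "claude-x", "api_key": "user-key", "origin_ip": "198.51.100.9"},
    fake_upstream,
)
```

The key point is visible in `resp`: the provider only ever sees a pooled credential and the hub's IP, which is why per-account bans and IP geoblocking fail to reach the actual user.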

While this setup functionally sounds the same as legitimate Western API aggregators like OpenRouter, hubs operate in a completely different universe of legality and trust. Legitimate aggregators exist to simplify developer workflows and charge standard rates based on transparent corporate agreements. In contrast, hubs are explicitly built for evasion, routing data through irresponsible intermediaries.

Just like providing VPN services or selling Claude on Taobao, hubs are technically not permitted in China. Under China's regulations on AI service registration, AI services offered without filing and security assessment are illegal. But just as some small businesses skip AI registration without punishment, most hubs can too; the larger the operation, though, the less safe it is from enforcement.

The Supply Chain of Hubs

Hubs are not a single entity. They sit in the middle of a layered supply chain where most participants never interact directly with each other.

Upstream are resource providers: account merchants who bulk register or acquire Anthropic accounts; SMS verification platforms that provide foreign phone numbers required for registration checks; and on a more technical end, reverse engineers analyzing Anthropic client code for authentication shortcuts or detecting when detection logic changes. The payment infrastructure of card merchants and proxy networks also enables overseas billing from within China.

Upstream operators also handle the more complex KYC mechanisms, whether by AI or by human means. AI services have demonstrated the ability to generate highly realistic fake IDs that can bypass identity verification on major platforms, while deepfake tools now allow criminals to create digital clones that can pass remote biometric verification. Even if defenders can reliably detect AI impersonating humans, a more labor-intensive route exists: recruiting real people. Agents travel to low-income countries in Africa or Latin America to find willing individuals to complete verifications in person. The Worldcoin black market provides a documented precedent, with iris scans collected by KYC merchants in Cambodia and Kenya sold for less than $30.


Image: Twitter account promoting KYC verification services.

In the middle is the hub itself: a software interface that receives user requests and forwards them to Anthropic as though they originate from legitimate accounts; a payment integration (usually Alipay or WeChat); and a mundane operating layer that maintains its operations—reusing accounts until flagged, balancing loads in the pool, and continually adapting to Anthropic's abuse detection updates.

Downstream are the customers: individual developers using Codex or Claude Code, enterprises routing their internal workflows through proxies, application builders embedding APIs in their products, and secondary resellers wholesale purchasing access and repackaging it for individual customers on Taobao—as I documented last year.

Very few people operate the entire chain. Most participants possess one or two links and monetize them well, forming a resilient modular system. AI model providers can suspend individual operators, but the upstream account pool and downstream customer base remain intact. As long as there are developers wanting access to Claude and willing to provide credentials to the identity black market—both are persistent features—substitutes can be quickly established.


Image: A screenshot circulating in developer WeChat groups joking about bypassing the Anthropic KYC process; original in Chinese (above), translated by the author (below).

One Fish, Three Meals: How to Make Tokens Cheap

However, the strangest thing is not how Chinese users gain access to Claude or Claude Code, but how they obtain it at absurdly low prices: typically around 1 RMB per $1 worth of tokens, 70-90% below the official price. According to public discussions, hubs have at least three ways to make this possible, commonly described as "one fish, three meals."

The first meal: Access markup. This is possible because upstream resource providers can stock the proxy pool using several relatively "innocent" strategies:

Bulk registering API accounts to collect Anthropic's $5 free quota

Reselling unused quotas from others' accounts

"APImaxxing": splitting a $200 Max plan's hourly token quota among multiple users, exploiting the gap between Anthropic's fixed subscription price and the much higher per-token cost of equivalent API usage

In addition, there is a darker upstream input: accounts purchased with stolen or fraudulent credit cards, which cost operators nothing and flow into the proxy pool. How large this portion is relative to the "innocent" strategies above is hard to verify, but the two markets may share underlying infrastructure and personnel.
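The "APImaxxing" arbitrage above comes down to simple arithmetic: a flat-rate subscription, split across enough users, can undercut metered API prices even at a steep discount. All figures below are hypothetical placeholders, chosen only to make the mechanism visible; the article does not state real quota sizes or per-token rates.

```python
# Illustrative arithmetic for "APImaxxing" (splitting one flat-rate plan
# among several users). Every number here is an assumption for illustration.

subscription_usd = 200.0        # flat monthly plan price (from the article)
tokens_per_month = 500_000_000  # tokens the plan's quota allows (assumed)
api_price_per_mtok = 15.0       # official per-million-token API price (assumed)

# What the same volume would cost at metered API rates:
api_equivalent = tokens_per_month / 1_000_000 * api_price_per_mtok

# A hub splitting the plan across 10 users, selling at 10% of API value:
users = 10
cost_per_user = subscription_usd / users
price_charged_per_user = api_equivalent / users * 0.10

profit_per_user = price_charged_per_user - cost_per_user
```

Under these assumed numbers the hub sells tokens at a 90% "discount" and still clears a profit on every seat, which is why the markup alone can sustain the business even before logs enter the picture.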

The second meal: Model spoofing and output misreporting. Because user inputs and model outputs are routed through the proxy, users cannot verify which model actually serves their requests. A user may select Opus 4.7, but the proxy can quietly route to Sonnet, Haiku, or, in the worst case, to GLM or Qwen, fraudulently relabeling the output. In a recent paper from Germany's CISPA Helmholtz Center for Information Security (which cites my article on the gray market last year), researchers audited 17 API proxies and found widespread model spoofing: accessing "Gemini-2.5" yielded only 37.00% on medical benchmarks, far below the official API's 83.82%. For users, spoofing reveals itself only on complex tasks, when the output seems off (often described as "dumbed down"), and there is no easy way to prove it. A wealth of public complaints concern API proxies that evidently degrade model performance; these proxies are suspected of "watering down" their service by substituting inferior models for high-end cutting-edge ones.
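The audit method the CISPA researchers used can be reduced to a simple idea: run the same benchmark through the official API and through the proxy, and treat a large accuracy gap as evidence of substitution. The sketch below uses stub "models" and a made-up four-question benchmark; the threshold value is an arbitrary assumption, not one from the paper.

```python
# Toy version of a proxy-spoofing audit: compare benchmark accuracy through
# two routes and flag a large gap. Models, questions, and the threshold are
# all stand-ins, not the CISPA study's actual setup.

def accuracy(answer_fn, benchmark):
    """Fraction of benchmark questions the model answers correctly."""
    correct = sum(1 for question, gold in benchmark if answer_fn(question) == gold)
    return correct / len(benchmark)

benchmark = [("q1", "a"), ("q2", "b"), ("q3", "c"), ("q4", "d")]

# Stub for the model reached via the official API (answers everything).
official_route = {"q1": "a", "q2": "b", "q3": "c", "q4": "d"}.get
# Stub for a weaker model silently substituted by a proxy.
proxy_route = {"q1": "a", "q2": "x", "q3": "x", "q4": "x"}.get

gap = accuracy(official_route, benchmark) - accuracy(proxy_route, benchmark)
SPOOF_THRESHOLD = 0.2  # arbitrary: how much degradation we treat as suspicious
suspected_spoofing = gap > SPOOF_THRESHOLD
```

The same comparison explains why individual users rarely catch spoofing: without a held-out benchmark and a trusted official baseline, a single degraded answer is indistinguishable from an ordinary model mistake.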

In addition to spoofing models, excessive token consumption also makes each token cheaper while driving up total spending. Some of this is structural: proxies that frequently rotate accounts break cache continuity as a side effect, forcing users to burn full-price tokens on context that would otherwise be nearly free. Some of it may be intentional, as proxy operators try to squeeze more usage out of users. It is difficult to distinguish the two from the outside.
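The cache-continuity cost is easy to quantify with a back-of-the-envelope comparison. Providers such as Anthropic do price cached prompt reads far below cold input, but the specific rates, context size, and turn count below are illustrative assumptions only.

```python
# Why account rotation inflates token bills: re-reading a cached prompt
# prefix is much cheaper than re-sending it cold, and rotating accounts
# invalidates the cache. All prices and sizes here are assumed figures.

prefix_tokens = 100_000        # long repository context re-sent each turn
turns = 50                     # conversation turns in a coding session
full_price_per_mtok = 3.0      # cold input price per million tokens (assumed)
cached_price_per_mtok = 0.3    # cached-read price, ~10x cheaper (assumed)

def cost(price_per_mtok, tokens):
    return tokens / 1_000_000 * price_per_mtok

# Stable account: pay full price once, then cached reads on every later turn.
stable_account = (cost(full_price_per_mtok, prefix_tokens)
                  + cost(cached_price_per_mtok, prefix_tokens) * (turns - 1))

# Rotating pool: every turn may land on a fresh account with a cold cache.
rotating_pool = cost(full_price_per_mtok, prefix_tokens) * turns
```

Under these assumptions the rotating pool burns roughly 8x more input spend for identical work, which is the "structural" waste described above: the user pays for it even when the operator gains nothing.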

The third meal: Logs are the product. This may be the most important part as it intersects with data privacy and distillation. Every request through the proxy—complete prompt, full response, tool calls, iterations—resides on the servers of the proxy operator. For AI coding proxies, these logs contain long chains of reasoning, real engineering decisions, repository context, and human-validated correct outputs. This makes them ideal datasets for post-training: for supervised fine-tuning on real engineering tasks, and for distilling Claude's reasoning patterns into smaller models while capturing the entire reasoning trajectory. The Chinese developer community asserts that this is happening at least in some cases, but whether proxy operators are systematically collecting and selling these logs, and to whom, remains unverified. However, downstream distilled data does exist on the open web. Several datasets of Claude Opus 4.6 reasoning outputs circulated on HuggingFace, with the source of the outputs unclear. In theory, similar distilled datasets could be cleaned and sold to other model developers in China.
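What makes a proxy log valuable as training data is its structure: prompt, context, reasoning trace, final output, and an implicit human-acceptance signal all arrive in one record. The sketch below shows how such a record could be turned into a supervised fine-tuning pair; every field name and the conversion helper are hypothetical illustrations, not a documented operator pipeline.

```python
# Hypothetical shape of one proxy log record and its conversion into a
# supervised fine-tuning (SFT) example. Field names are illustrative only;
# no real operator's schema is known.

log_record = {
    "prompt": "Refactor this function to be thread-safe ...",
    "repo_context": ["src/cache.py", "src/locks.py"],      # files the tool read
    "reasoning_trace": "First, identify the shared state ...",  # chain of thought
    "response": "def refactored(): ...",
    "accepted_by_user": True,   # the human-validation signal logs capture for free
}

def to_sft_example(record):
    """Keep only records the user accepted; pair the prompt with the full
    reasoning trajectory plus the final answer as the training target."""
    if not record["accepted_by_user"]:
        return None
    return {
        "input": record["prompt"],
        "target": record["reasoning_trace"] + "\n" + record["response"],
    }

example = to_sft_example(log_record)
```

The acceptance flag is what distinguishes these logs from scraped text: each accepted record is effectively a free human label on a real engineering task, which is exactly what distillation and post-training pipelines want.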

The first two meals provide tokens cheaper than Anthropic's official pricing, but dropping prices to absurd levels like 10% or even 5% of the original requires consuming the third meal. As the Chinese saying goes, there is no free lunch in the world. Several Chinese developers say that the markup business is merely a means of customer acquisition, while the real profit source is the harvesting of logs. Users are both paying customers and unpaid data producers, trading their private data to proxy operators for lower prices. There are also warnings that data leaked by proxies could be used for marketing, scams, or even blackmail. To avoid privacy risks, some Chinese developers have built their own Claude Code API proxies and open-sourced operational guides.

What Cannot Be Known Through Real-Name Verification

The use of AI is gradually shifting from chatbots to tool usage. As the agent and token economy rises, the question of using U.S. models is no longer just about access, but extends to cost-effectiveness. This is because the Chinese AI ecosystem—whether cutting-edge laboratories, university research groups, independent developers, or enthusiasts—generally lacks funding. Meanwhile, the data generated by users through hubs clearly flows into downstream markets, being used for model training, data trading, or fraud. If distillation is also part of this economic system, then the issues extend far beyond what the U.S. government or AI companies anticipate as harms from a few cutting-edge participants.

History tells us that access blockades rarely stop determined users. Blockades increase access costs, thereby creating profitable markets for anyone capable of lowering costs. The Great Firewall has made VPN services a booming home industry in China. KYC requirements have spawned an economy of forged identities, ranging from domestic ID resellers to biometric gathering operations in Southeast Asia or Africa. The multi-layered controls of leading AI companies—geoblocking, phone verification, credit card requirements, and now real-time biometric checks—produce the same effects.

However, this story transcends the framework of "Anthropic/U.S. against China." It points to a disturbing truth about access control, whether at geopolitical borders or in broader contexts. A method for a geoblocked developer to circumvent controls is structurally identical to the methods that terrorists might use to access cutting-edge AI models and create destructive biological weapons undetected. Access issues are both unique geopolitical considerations and shared security concerns.

Nowadays, AI safety research treats system-level access controls (particularly detection, monitoring, and account bans for publicly served closed-weight models) as important safeguards. For monitoring, developers control the inference infrastructure, including real-time flagging of harmful inputs and outputs. Detection (such as KYC requirements) assumes providers can attribute behavior to identifiable actors, while account bans assume that banning an account effectively denies access. But U.S. model providers do not control the inference path of Chinese users routed through hubs; the proxy operators do. When harmful requests arrive, providers see the proxy's IP, not the real user's. When an account is banned, the upstream supply chain can stand up new proxies within hours.

For more sophisticated monitoring tools, the problem is graver still. Anthropic's Clio system is partly designed to detect coordinated abuse that is invisible at the level of individual conversations, working by identifying patterns across accounts and dialogues. For example, it might identify a network of automated accounts generating search-engine spam with similar prompt structures and ban them. But because requests are routed through proxies, banning does not stop the underlying behavior. And for deliberately orchestrated attacks, such as dispersing harmful queries across multiple stages and proxy accounts so that each request looks harmless, cross-account patterns are far less obvious than coordinated spam, which carries naturally strong signals.
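The cross-account detection idea can be illustrated with a toy version: flag accounts whose prompts collapse to the same template after crude normalization. Clio is far more sophisticated than this; the normalization rule, sample requests, and threshold below are invented purely to show the kind of signal proxies dilute.

```python
# Toy cross-account pattern detection: accounts whose prompts share a
# near-identical template are flagged as a coordination signal. This is
# only an illustration of the idea; it is not Anthropic's Clio.
from collections import Counter

def template(prompt: str) -> str:
    # Crude normalization: dropping digits makes "spam page 1/2/3" collapse
    # into one template.
    return "".join(ch for ch in prompt if not ch.isdigit())

requests = [
    ("acct_a", "Write SEO spam page 1 about shoes"),
    ("acct_b", "Write SEO spam page 2 about shoes"),
    ("acct_c", "Write SEO spam page 3 about shoes"),
    ("acct_d", "Help me debug a segfault"),
]

by_template = Counter(template(p) for _, p in requests)
COORDINATION_THRESHOLD = 3  # arbitrary: how many repeats look coordinated
flagged = {acct for acct, p in requests if by_template[template(p)] >= COORDINATION_THRESHOLD}
```

The weakness the article describes is visible here too: if all four requests arrive from one proxy account, the per-account attribution that makes `flagged` actionable disappears, and a ban hits the proxy rather than the operators behind it.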

Finally, hubs do not fit neatly into traditional attack-defense paradigms, whether between U.S. AI companies and Chinese users or between AI safeguards and malicious actors. The black market has its own supply chains and exploitation logic, producing harms that far exceed the initial access issues. Facial data harvested today for proxy KYC verification to circumvent Anthropic's system can be resold tomorrow to open fraudulent financial accounts, forge employment records, or generate deepfakes, while the original subjects in the Global South bear the legal and reputational consequences. The infrastructure routing Claude requests can be used to deceive users through model substitution, targeted scams based on leaked prompt data, or blackmail. The account-farming operations maintaining the proxy pool—bulk SMS verification, fraudulent registrations, and account takeovers—feed broader criminal markets for robocalls, phishing SMS, fraudulent loan applications, and credit card scams. Many of these harms have nothing to do with AI or geopolitics.

Yet every byproduct of the gray market already exists, from the potential danger of terrorists using AI to synthesize the next pandemic to real-world exploitation and crime. Although the Great Firewall and AI geoblocking try to draw national borders around who can access cutting-edge technology, the gray market shows that harms cannot be compartmentalized.

Acknowledgments:

Zilan thanks Alan Chan, Gabriel Wagner, Karuna Nandkumar, and Kayla Blomquist for their helpful feedback.

The author acknowledges the use of LLMs for preliminary desk research, technical concept clarification, and manuscript editing, and is indeed very grateful that she can still access Claude through Singapore nodes via VPN in mainland China without triggering the KYC process.

Information from informal conversations.

Application Programming Interfaces (APIs) are gateways that allow developers to connect software directly to AI models—sending requests to Anthropic's servers and receiving responses programmatically rather than via browser interaction.

Specifically, that involves replacing the ANTHROPIC_BASE_URL environment variable with the proxy's address.

From informal conversations and desk research.

