
Two Americans observe Chinese AI and see the same thing.

Techub News
2 hours ago

Author: Uncle Who Doesn't Understand Classics

In April 2026, two significant articles about AI in China appeared almost simultaneously in mainstream American media.

One was published in The New York Times, by a journalist who had just returned from a reporting trip to China. He traveled through Beijing, Shanghai, Shenzhen, and Hangzhou, holding in-depth conversations with more than a dozen key figures in China's AI field. After seeing the state of China's AI development firsthand, he concluded that the United States simply "can't beat" China.

The other article was published in Foreign Affairs, authored by Jake Sullivan, the former National Security Advisor of the Biden administration. This was his first systematic public discussion about US-China technology competition since leaving the White House.

The perspectives and expressions of the two articles are completely different. The journalist writes about his observations, his experience riding in a self-driving car at Huawei's campus, and the exact words he heard while chatting face-to-face with CEOs of Chinese tech companies. Sullivan writes from a strategic framework, providing a comprehensive assessment of the US-China tech landscape formed during his four years in the White House, extensive and detailed, covering everything from semiconductors to biotechnology, from supply chains to clean energy.

One is bottom-up, the other is top-down.

However, after reading both articles, you will find something somewhat surprising: the outlines of the Chinese AI they saw overlap. And that outline has significant deviations from the image of Chinese AI portrayed in mainstream narratives in the US over the past few years.

1. The US and China are not on the same track in AI

Let’s first discuss the first thing that both individuals observed in common.

In Sullivan's long article in Foreign Affairs, he immediately made a self-correction. He stated that the US's positioning of Chinese technology over the decades has been based on a "quiet yet powerful assumption":

It is believed that Beijing is essentially running the same race as the US, just a few steps behind. China is seen as an imitator, adept at replication, lagging in innovation, and ultimately dependent on acquiring Western technology. America's lead is viewed as enduring, even self-sustaining.

Sullivan's judgment is: this assumption has not been validated.

He believes that China is not chasing America's innovation on the same track. Instead, China is pursuing a different path: focusing on production scale, control of key raw materials, and mastering supply chain nodes. He provided several specific examples.

China currently produces over 70% of the world’s lithium-ion batteries and controls about three-quarters of global battery manufacturing capacity. In areas like rare earth processing and pharmaceutical raw materials, China has also established a dominant position. Sullivan's summary is concise: the US is running one race, while China is running another.

The US focuses more on maintaining a lead in breakthrough innovations, believing these breakthroughs will naturally convert into economic, military, and soft power. China, on the other hand, focuses precisely on the "conversion" itself, striving to turn technological advancements into practical capabilities in the economic and security domains.

If this judgment came only from a desk in Washington, it might lack persuasive power. But far from the White House, the New York Times journalist, through field interviews in China, arrived at the same picture from a completely different angle.

In 2022, the Biden administration implemented chip export control policies, attempting to curb China's AI development by cutting off high-end semiconductor supplies. The premise of this policy is: the high-end chipsets used for AI data centers are about the size of a skateboard, making it difficult to simply stuff them into a suitcase for smuggling, and without the onsite support of the chip manufacturer's engineering team, they are difficult to operate. Therefore, theoretically, the regulations should be effective.

However, the journalist discovered that the gap between reality and theory is much larger than imagined.

Chinese AI model developers found a simple workaround: training models on chips from other countries. A Chinese team only needs to rent computing power from an AI data center in a Southeast Asian neighbor to complete model training. Hiding the Chinese origin of the model is not difficult. Meanwhile, China is learning how to reduce reliance on a single high-end chip by stacking and combining lower-performance chips.

There is also a more clever path: distillation. Whenever a US lab introduces a cutting-edge model, Chinese competitors quickly reverse-engineer its capabilities and build versions with comparable performance using less computing power. The journalist wrote, "Latecomers are at a competitive advantage."
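The "distillation" the journalist refers to has a concrete technical meaning. In classical knowledge distillation, a smaller student model is trained to match the temperature-softened output distribution of a larger teacher, which is why a follower can recover much of a frontier model's behavior with far less compute. A minimal sketch of the core loss in plain Python follows; the logit values are purely illustrative, and real pipelines would of course operate on tensors over large datasets:

```python
import math

def softmax(logits, temperature=1.0):
    # Scale logits by temperature; higher T yields a softer distribution
    # that exposes the teacher's preferences among non-top classes.
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL divergence KL(teacher || student) between the two softened
    # distributions: the core objective in knowledge distillation.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# A student whose logits track the teacher's incurs a near-zero loss;
# a mismatched student incurs a larger one.
teacher = [4.0, 1.0, 0.2]
close_student = [3.8, 1.1, 0.3]
far_student = [0.2, 4.0, 1.0]
assert distillation_loss(teacher, close_student) < distillation_loss(teacher, far_student)
```

The temperature term is the key design choice: it softens both distributions so that the teacher's full ranking over outputs, not just its hard top answer, carries training signal. (When labs accuse rivals of "distilling" a closed model through its API, they mean a looser variant of the same idea, with the teacher's sampled outputs standing in for its logits.)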

American AI scientists once had a standard response to the threat of "fast followers." They argued that an "intelligence explosion" is imminent: AI systems will soon be able to write the code for their own upgrades, improving themselves recursively. The country that first reaches this "singularity" will win the AI race, even if followers are only months behind.

Three and a half years later, AI has indeed begun generating code to upgrade itself, initiating this feedback loop. Yet the journalist observed that this has not widened the gap, because the key to the competition does not lie in the model itself.

A strategic judgment derived in Washington, and a fact observed firsthand in Shenzhen, point in the same direction.


2. Deployment, Deployment, Deployment

The second point of intersection between the two articles is more specific: the key to AI competition lies not in whose model is more powerful, but in who can truly embed AI into the functioning of the economy.

This judgment may seem counterintuitive. Over the past two years, discussions of AI competition in tech media and on social networks have revolved almost entirely around "models": whose parameter count is larger, whose benchmark scores are higher, who has released the next generation of large models. Rankings have been refreshed again and again, each refresh triggering a wave of anxiety or excitement.

Sullivan sees another dimension. In his article, he writes:

Merely being the first to discover new breakthroughs is not enough if others deploy them faster than you. Simply being ahead in design is also insufficient if the inputs and production capacity required for manufacturing are not controlled by the US or its allies.

He also cited research by political scientist Jeffrey Ding to support this judgment. Ding's research indicates that during the Cold War, the Soviet Union was not behind the US in many frontier technology areas, but it comprehensively failed in the technological diffusion and application stage. Technologies were invented but were not widely adopted or infiltrated into the economic and social bloodstream. This was the real reason for the Soviet Union's failure in the tech race.

Sullivan's implication is clear: the US cannot repeat the mistakes of the Soviet Union. Having the best labs is not enough; technology must be utilized.

The New York Times journalist provided concrete imagery for this strategic judgment through his personal experiences in China.

He noted that companies like Huawei and Hikvision, which are under US sanctions, are deploying AI systems in a range of real-world scenarios: high-speed rail maintenance checks, mining operations management, water sample scanning and analysis to assess pollution conditions. These are not demos in a lab; they are functioning industrial systems.

The journalist himself sat in a self-driving car at Huawei's campus near Shenzhen. There was a device in the passenger seat massaging his back, and in his own words, the vehicle's steering control was "perfect."

His conclusion echoes Sullivan:

The outcome of the AI competition is determined not by the ever-increasing capabilities of frontier models, but by AI deployment. To change the economic landscape, AI must be embedded in business processes. The original computing power of cutting-edge models must be converted into practical applications.

Sullivan approached from strategic theory, the journalist from onsite observations; both individuals completely converge on the judgment that "deployment is more important than models." For those accustomed to monitoring large model rankings to assess competitiveness, this shift in perspective may take some time to digest.

3. Manufacturing is the soil of AI

There is a section in Sullivan's article that reads more like a discussion of industrial history rather than AI. Yet it is precisely this section that elevates the question of "why China excels in AI deployment" to a deeper level.

For decades, the US has formed a widely accepted division of labor hypothesis: technical design and cutting-edge research are America's natural strengths, while manufacturing is a cost center that can be safely offshored. Sullivan believes this assumption has issues. His original words deserve close reading:

As manufacturing leaves, engineering knowledge also flows away. Over time, this brain drain erodes the feedback loop that supports technological leadership.

He referenced research by Nobel laureates Daron Acemoglu and Simon Johnson. The two economists traced the history of the Industrial Revolution and found a pattern: the engineers who truly advanced its inventions came largely from the manufacturing front lines and had hands-on backgrounds. They were not theorists sitting in studies but practitioners who had worked with machines and smelled the machine oil of the workshop.

From this, Sullivan drew a judgment:

A country that no longer produces things nor tinkers with technology will ultimately lose the capability to drive these technological advancements. A country that neglects the overall industrial base is losing institutional knowledge, supply chain control, and the depth and diversity of production, making it more difficult to establish strength in specific critical fields when necessary.

The man making this statement spent four years in the White House directly designing technology policy toward China. This is not a view from an academic paper but a retrospective from a decision-maker.

Looking back at the journalist's observations in China, there is an interesting correspondence. The AI deployment cases he witnessed—high-speed rail maintenance, mining management, autonomous driving, water quality monitoring—all thrived in the context of manufacturing and the real economy.

Huawei could integrate AI into self-driving cars only because it has accumulated decades of experience in communications hardware and electronics manufacturing. Hikvision can embed AI into industrial inspection processes precisely because it has deep roots in the research and production of security equipment.

The AI capabilities of these companies did not emerge spontaneously. Their foundation is manufacturing. Factories provide vast amounts of real-world scenario data; production lines offer the physical carrier for AI deployment; mature supply chain networks provide the possibility for scaling.

Sullivan theoretically demonstrated that "innovation cannot be separated from manufacturing" in Washington, while the journalist observed practical examples of this argument in Shenzhen. These two threads converge here, pointing towards a potentially long-underestimated fact: manufacturing is not the opposite of innovation; it is the infrastructure of innovation.


4. The Same Diagnosis, Two Prescriptions

By this point, the observations of the two men are remarkably aligned. On the question of what should be done next, however, they head in different directions.

The judgment of The New York Times journalist is clear: the chip export control has failed.

His evidence comes from his field observations. China's means of circumventing the controls are not one but many, operating simultaneously: renting data center compute in Southeast Asian countries, stacking and combining lower-end chips, and rapidly replicating the capabilities of American frontier models through distillation. He wrote:

Even if the Senate follows the House in passing a bill restricting China’s use of overseas data centers, China’s ability to circumvent controls will not change.

In his view, the problem with control is not merely that it "has not achieved the desired effect." The greater cost is that it has blocked another possibly more valuable pathway. His interviews in China led him to believe that Chinese tech elites indeed care about AI safety issues. If the US had initially chosen dialogue instead of blockade, the outcome might have been different.

He proposed a specific alternative: the US should negotiate an AI version of the Nuclear Non-Proliferation Treaty with China, using the lifting of chip controls as leverage to secure China's participation in a global AI safety governance framework. He admits this may sound naive, but he believes that compared with a blockade strategy doomed to fail, this route at least has a chance of success.

The Biden administration made a strategic choice to prioritize delaying China’s development over other concerns. An alternative could have been to tell China: you are a technology superpower, we are also a technology superpower. Let us work together to ensure AI does not fall into the hands of rogue states and terrorists.

Sullivan's position is different. He does not believe that controls should be abandoned; on the contrary, he systematically argues why they need to continue. The approach, however, should be adjusted.

He suggested continuing to emphasize a "small yard with high walls": the scope of control should be narrow and precise, focusing only on the most sensitive technology nodes, such as the most advanced semiconductors; but within this small scope, the control must be strong enough. He explicitly opposes total decoupling, arguing that maintaining trade in non-sensitive areas like agricultural products and basic consumer goods benefits American households. However, regarding advanced computing chips, he believes that easing restrictions means "voluntarily giving up one of the most decisive advantages currently held by the US and its allies."

In response to the doubt that "controls backfire and instead stimulate China’s self-research," Sullivan directly replied:

Chinese leaders had already made independent chip research and development a top national priority before these control measures were implemented, and had allocated substantial resources to it.

His message is clear: China's push for chip self-sufficiency was not "provoked" by the controls; the controls simply made an already ongoing process more urgent.

The two men largely agree on the factual judgment: China's technological capabilities are much stronger than outside observers commonly acknowledge, and total decoupling is neither realistic nor desirable. The real divergence lies deeper: should the diffusion of frontier technology be viewed as a natural process that, like water, will always find a way through, or can it be delayed and managed through precise intervention?

The journalist tends toward the former. Everything he observed in China tells him that the power of technological diffusion is stronger than any policy tool. Sullivan leans toward the latter. He believes that even if full prevention isn’t possible, delaying itself holds strategic value.

This divergence does not have a standard answer. But it reveals a deeper tension: as the capabilities of these two tech giants converge, the choice between "control" and "cooperation" becomes increasingly difficult to answer.


5. "You won't open-source nuclear weapons."

Another underlying line in both articles is easily buried under the noise of policy debates.

Regarding AI safety, there is a genuine, not performative, concern forming within the global tech community.

The New York Times journalist shared a detail. He visited a well-known Chinese tech company that builds and releases foundational AI models. The company's models are currently open-source, meaning anyone can download and modify them; if someone used one to launch a cyberattack, no mechanism could prevent it.

However, the company's CEO told the journalist something surprisingly candid: as AI becomes more powerful, it would be insane to keep insisting on open-sourcing.

You wouldn't open-source nuclear weapons.

Note who is saying this. It is not an AI safety researcher calling for caution at an academic conference, but the founder of a company that currently ships open-source AI models, questioning his own business model.

The journalist also documented another scene. The AI agent OpenClaw, nicknamed "Lobster," has sparked a download frenzy in China, with ordinary users eager to try this powerful AI assistant. On the surface, the enthusiasm seems to confirm a popular saying: Chinese people love innovation more than they fear it.

However, the researchers and industry leaders he spoke with had a completely different attitude. A well-known business school professor told him that OpenClaw leaves your computer "exposed." Shortly afterward, officials explicitly barred the use of OpenClaw in government systems and warned the public that the agent could put personal data at serious risk.

Across the ocean, Sullivan devotes an entire section of his long strategic essay to AI safety standards. He advocates that the US lead in establishing a standardized safety assessment system for AI systems before release, and create joint screening protocols in fields where AI is increasingly integrated with synthetic biology.

He also responded to a sentiment gaining traction in the US tech community: that a focus on safety will slow progress and cause the US to fall behind. Sullivan's rebuttal takes an unusual angle:

Ensuring safety and trust will not make the US and its allies slower. Ultimately, it will make them faster. Because uncertainty is the root of hesitation. When decision-makers and industries lack confidence in safety and reliability, they are even less willing to adopt new technologies.

This logical chain is worth unpacking. Safety is not the opposite of speed. Uncertainty is. When it is unknown whether an AI system will fail, and who will be responsible if it does, the instinctive response of businesses and governments is to wait and postpone adoption. Establishing safety standards, conversely, is a prerequisite for eliminating this hesitation and accelerating technological diffusion.

Sullivan also mentioned an easily overlooked fact: in 2024, during a meeting between the leaders of China and the US, they reached a consensus on "the use of nuclear weapons must remain under human control." This indicates that even on the most sensitive issues, the dialogue window between the two major countries is not completely closed.

Putting this information together reveals an interesting picture. Chinese tech CEOs are saying "we cannot open-source nuclear weapons," the Chinese government is limiting unsafe AI products, and America’s former National Security Advisor is arguing that "safety allows you to run faster." These voices come from different countries, different identities, and different interests, but their underlying anxieties are interconnected.

When a technology becomes powerful enough, "who runs faster" is no longer the only question that needs answering. "How to ensure it doesn’t get out of control" becomes equally urgent. This may be one of the issues in the global AI competition where a common language can be found.


6. The competition has just begun; it is far from the finish line.

Near the end of his article, Sullivan writes: this competition is not a sprint but a marathon.

But on closer examination, marathons at least have a finish line.

The space race had a finish line: the moon landing. The nuclear arms race had a finish line of sorts: the equilibrium of mutually assured destruction. What about the AI race? Sullivan himself acknowledges that victory will not be "a single moment of one side declaring victory." This competition will "continue indefinitely, spanning broad fields."

What does this mean? It means there is no point at which one side wins and can finally breathe easy. The competition is less a game with a winner and a loser than a permanent condition.

The New York Times journalist offered another perspective. At the end of his article, he reflected on a period in Cold War history:

During the Cold War, the United States often maintained its interests by shifting from confrontation to détente: the signing of the Nuclear Non-Proliferation Treaty occurred just six years after the Cuban Missile Crisis. Now is a good time to reflect on that history.

Six years. From almost destroying the world to sitting down to sign a treaty. Historical transitions can sometimes occur more swiftly than people might assume.

Two Americans observing Chinese AI saw the same thing: China has embarked on a technological path different from the pre-set script of the US, the industrial deployment of AI is becoming the real focus of competition, and AI safety is transitioning from an abstract issue to a tangible reality that everyone must address.

Their differences on "what to do" are also real. The journalist sees the unstoppable diffusion of technology, while the strategist perceives the necessity of precise control. Each judgment has its reasoning, but also its blind spots.

However, if we take a step back and view these two articles from a distance, we can find a meaning that neither author stated explicitly but both implied: in this competition without a finish line, what truly determines position may not be whose model scores higher, but who embeds technology deeper into the soil of the real world.

Uncle Who Doesn't Understand Classics has recently created a website: https://budongjing.com

It features high-leverage hardcore content and in-depth originals sifted through human diligence, focusing on four key factors: technology, culture, wealth, and power, as well as their interactions and influences. The website is priced at 1999 yuan/year, currently offering a beta discount price of 1299 yuan/year. Interested readers can subscribe by adding Uncle Jing on WeChat.


Additionally, feel free to join Uncle Jing's Knowledge Planet. The Planet focuses on the cutting-edge of AI technology and business, the underlying logic and top-level design of one-person companies and personal IPs, as well as cultural narratives and wealth cognition.

I am Uncle Jing who doesn't understand classics, the earliest in China to translate and introduce Naval's "How to Get Financial Freedom Without Luck", as well as "The Sovereign Individual", which influenced big names like Naval, Satoshi Nakamoto, and Musk.

The Knowledge Planet of Not Understanding Classics has many subscriptions from big accounts with millions of followers and billionaires. It focuses on sharing themes of one-person enterprises and one-person venture capital, with keywords: AI, IP, venture capital, and cutting-edge technology and business with high leverage content.



The more you understand, the freer you become.

Disclaimer: This article represents only the personal views of the author and does not reflect the position or views of this platform. It is provided for information sharing only and does not constitute investment advice of any kind to anyone. Any dispute between users and the author is unrelated to this platform. If any article or image on this page involves infringement, please send the relevant proof of rights and proof of identity to support@aicoin.com, and platform staff will investigate.
