Two $20 billion deals: OpenAI and NVIDIA are in an "inference war"

Source: PANews

Written by: xiaopi

In December 2025, NVIDIA quietly spent $20 billion to acquire a company called Groq, which specializes in AI chips.

On April 17, 2026, OpenAI announced it would procure over $20 billion worth of chips from another AI chip company, Cerebras. On the same day, Cerebras officially submitted IPO documents to Nasdaq, aiming for a valuation of $35 billion.

The two amounts are almost identical. One is an acquisition; the other is a procurement. One comes from the world's largest AI chip seller, and the other comes from the world's largest AI buyer.

These are not two independent events; they are two symmetric actions in the same war. The battlefield is called: AI inference.

Most people haven't noticed this war. There are no explosions, only lines in financial announcements and technical debates circulating among Silicon Valley engineers. But its impact may be more profound than any AI launch event of the past two years—because it is redistributing control over what is almost certain to become the largest technology market in history.

What is inference, and why the keyword for 2026 is no longer "training"

Before discussing the two $20 billion amounts, it’s essential to understand the background: the battlefield of AI chips is undergoing a shift in focus.

Training and inference are the two stages of AI compute consumption. Training builds the model: massive amounts of data are fed to a neural network so it can learn a capability, a process that typically happens once or is repeated periodically. Inference is using the model: every time a user poses a question and ChatGPT returns an answer, that is one inference request.

In 2023, the majority of global AI compute spending went to training, with inference playing a supporting role.

But this ratio is rapidly reversing.

According to market research data from Deloitte and CES 2026, inference accounted for 50% of total AI computing power spending in 2025; in 2026, this ratio is expected to jump to two-thirds. Lenovo CEO Yang Yuanqing stated more bluntly at CES: the structure of AI spending will completely flip from "80% training + 20% inference" to "20% training + 80% inference."

The logic isn’t complicated. Training is a one-time cost, while inference is an ongoing cost. GPT-4 is trained once, but it must answer billions of user questions daily; each conversation is an inference request. After large-scale deployment, the cumulative consumption of inference far exceeds that of training.
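A back-of-envelope calculation makes the crossover concrete. The sketch below uses the standard scaling heuristics (roughly 6·N·D FLOPs to train a model with N parameters on D tokens, and roughly 2·N FLOPs per generated token); every concrete number in it is an illustrative assumption, not a figure from the article.

```python
# Rough cost model: one-time training vs. ongoing inference.
# Heuristics: training ~ 6*N*D FLOPs, inference ~ 2*N FLOPs per token.
# N, D, and the serving volume below are illustrative assumptions.

N = 1e12                    # model parameters (assumed: 1 trillion)
D = 15e12                   # training tokens (assumed: 15 trillion)

train_flops = 6 * N * D     # ~9e25 FLOPs, paid once

tokens_per_day = 1e12       # assumed daily tokens served across all users
infer_flops_per_day = 2 * N * tokens_per_day   # ~2e24 FLOPs, paid every day

days_to_parity = train_flops / infer_flops_per_day
print(f"Inference compute matches the training bill after ~{days_to_parity:.0f} days")
# -> ~45 days; after that, cumulative inference exceeds the entire training run.
```

Under these assumptions, cumulative inference overtakes the whole training run in about a month and a half, and the gap only widens from there.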

What does this mean? It means that the most lucrative part of the AI industry is shifting from "training chips" to "inference chips." And these two types of chips require entirely different architectural designs.

NVIDIA’s problem: chips designed for training are inherently not good at inference

NVIDIA’s H100 and H200 are monsters designed for training. Their core advantage is their extremely high computational throughput—training requires performing numerous multiplications on massive matrices, and GPUs excel at this "multi-core parallel computing."

But the bottleneck of inference is not computation; it is memory bandwidth.

When users ask questions, the chip needs to "move" the entire model’s weights from memory to the computation units before it can generate an answer. This "movement" process is the true source of inference latency. NVIDIA’s GPUs use external high-bandwidth memory (HBM), which inevitably introduces delays during this transportation step—this delay, multiplied by the scale of processing tens of millions of requests per second for ChatGPT, becomes a real performance bottleneck.
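To see why bandwidth, not arithmetic, sets the floor, consider a minimal roofline-style estimate. The model size and precision below are illustrative assumptions; the bandwidth figure is NVIDIA's published HBM3 spec for the H100 SXM.

```python
# Why token generation is memory-bandwidth bound: at batch size 1,
# every generated token must stream the full set of model weights
# from memory at least once, so bandwidth caps tokens per second.
# Model size and precision are illustrative assumptions.

params = 70e9                                 # assumed: 70B-parameter model
bytes_per_param = 2                           # FP16
weight_bytes = params * bytes_per_param       # 140 GB of weights

hbm_bandwidth = 3.35e12                       # H100 SXM HBM3: ~3.35 TB/s

latency_floor = weight_bytes / hbm_bandwidth  # seconds per token, lower bound
print(f"~{latency_floor*1e3:.0f} ms/token floor, "
      f"~{1/latency_floor:.0f} tokens/s max")  # ~42 ms, ~24 tokens/s
```

No amount of additional compute raises that ceiling; the levers are more bandwidth, larger batches, or smaller weights, and moving memory on-die is precisely the bandwidth lever Cerebras pulls.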

OpenAI's engineers ran into this problem while optimizing Codex (a code generation tool): no matter how they tuned parameters, response speed was capped by the architectural ceiling of NVIDIA's GPUs.

In other words, NVIDIA’s disadvantage in inference is not a matter of effort but a matter of architecture.

Cerebras's WSE-3 chip takes a completely different route. The chip is so large it requires wafer-scale packaging: at 46,225 square millimeters, it is larger than a human palm, integrating 900,000 AI cores and 44GB of ultra-fast SRAM on the same silicon wafer. Memory sits directly next to the compute cores, shortening the "movement" distance from centimeters to micrometers. The result: inference speeds 15 to 20 times faster than NVIDIA's H100.
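Plugging on-wafer SRAM into the same latency-floor arithmetic shows the scale of the architectural gap. The 21 PB/s figure is Cerebras's widely cited aggregate SRAM bandwidth for the WSE-3; the model size is again an illustrative assumption, chosen so the weights fit in the 44GB of on-wafer SRAM.

```python
# Same per-token floor, off-chip HBM vs. on-wafer SRAM, for a model
# small enough to live entirely in WSE-3's 44 GB of SRAM.
# Model size is an illustrative assumption; bandwidths are quoted specs.

weight_bytes = 8e9 * 2                  # assumed: 8B params in FP16 -> 16 GB

for name, bandwidth in [
    ("H100 HBM3 ", 3.35e12),            # ~3.35 TB/s, off-chip
    ("WSE-3 SRAM", 21e15),              # ~21 PB/s, on-wafer aggregate
]:
    floor_ms = weight_bytes / bandwidth * 1e3
    print(f"{name}: {floor_ms:.4f} ms/token floor")
# H100: ~4.8 ms/token; WSE-3: ~0.0008 ms/token. In practice compute,
# interconnect, and scheduling dominate long before the SRAM floor,
# which is why measured gains are ~15-20x rather than thousands.
```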

It should be noted that NVIDIA is not sitting idle. Its latest Blackwell (B200) architecture improves inference performance fourfold over the H100 and is being deployed at scale. But Blackwell is chasing a moving target: Cerebras keeps evolving too, and new inference-focused competitors keep emerging across the chip market.

NVIDIA's $20 billion: the admission behind its largest-ever acquisition

On December 24, 2025, NVIDIA announced the largest acquisition in its history.

The target is Groq.

Groq is a competitor similar to Cerebras, focused on SRAM-based chips optimized for inference. Its chip, the LPU (Language Processing Unit), was recognized at the time in public benchmarks as the world's fastest inference service. NVIDIA spent $20 billion to acquire Groq's core technology and founding team, including founder Jonathan Ross and several top chip engineers from Google's TPU team.

This is NVIDIA's largest acquisition since its $7 billion purchase of Mellanox in 2019, nearly three times the size of that deal.

Many analysts believe the message behind this amount is far more important than the figure itself: NVIDIA recognizes there is a structural gap in inference, and it is significant enough to warrant spending $20 billion to fill it.

If NVIDIA genuinely believed its GPUs were unbeatable in inference, it would not need to acquire Groq. The sum is essentially a $20 billion technology procurement order: it acknowledges the real advantage of embedded-SRAM architectures in inference scenarios, concedes that NVIDIA's existing product line cannot naturally cover that advantage, and pays a premium to fill a technical gap the company could not close itself.

Of course, NVIDIA's official narrative after the acquisition tells a different story: "deep integration with Groq to provide a more complete inference solution." Translated into plain engineering terms: we realized what we had wasn't enough, so we bought someone else's.

OpenAI's $20 billion: buying chips is just the surface; taking a stake is the key

Now, back to OpenAI.

In January 2026, OpenAI signed a three-year, $10 billion compute procurement agreement with Cerebras. Media reports at the time framed it as "OpenAI diversifying its chip suppliers," glossing over its significance.

But the latest details revealed on April 17 fundamentally changed the nature of the transaction:

First, the procurement amount doubled from $10 billion to $20 billion.

Second, OpenAI will receive warrants for Cerebras shares, with the stake potentially reaching 10% of Cerebras's total equity as the procurement scales up.

Third, OpenAI will also provide Cerebras with $1 billion for data center construction—put simply, OpenAI is helping Cerebras build facilities.

When considered together, these three details paint a completely different picture: OpenAI isn't just buying chips; OpenAI is incubating a supplier.

This logic has clear precedents in technology history. Apple's early iPhones ran on chips built by Samsung; as Apple deepened its involvement, it moved to its own A-series designs and eventually the M-series, shifting control of its silicon supply chain from Samsung and Intel to Apple itself. What OpenAI is doing is somewhat similar, with one crucial difference: Apple controlled chip design from early on, while OpenAI is still a purchaser, and after Cerebras goes public it will develop independently and serve more customers. The endpoint of this path may not be OpenAI taking full control of Cerebras, but rather the two forming a deeply interdependent ecosystem.

On one hand, OpenAI is binding Cerebras to itself with $20 billion in purchases and an equity stake, ensuring a continuous supply of inference compute that does not rely on NVIDIA; on the other, it is collaborating with Broadcom on its own ASIC chips, expected to enter mass production by the end of 2026. OpenAI is advancing on both fronts toward the same goal: compute autonomy.

Cerebras files for IPO: what exactly are you buying

On April 17, Cerebras officially submitted its IPO application to Nasdaq, targeting a valuation of $35 billion and planning to raise $3 billion.

That valuation is more than four times the $8.1 billion Cerebras was worth in September 2025. In February it had just closed a new funding round at a $23 billion valuation, making the $35 billion IPO target a 52% premium over that.

Those familiar with Cerebras's history know this is its second attempt at going public. The first attempt, in 2024, was withdrawn after CFIUS intervened on national security grounds: its core customer G42 (an Emirati sovereign-backed technology investment fund) accounted for 83%-97% of that year's revenue.

This time, G42 has disappeared from the shareholder list, replaced by OpenAI.

In other words, Cerebras's structural problem of customer concentration has not been fundamentally resolved; the name of the major customer has changed, but the dependence on a single large client remains. Investors must judge whether the new anchor customer is better or worse. From a credit perspective, OpenAI is clearly superior to G42; from a strategic perspective, OpenAI is simultaneously incubating Cerebras's future rival—once its self-developed ASICs mature, they pose a genuine replacement threat to Cerebras.

To be fair, Cerebras is actively expanding its other customers, and the prospectus is expected to show more diversified revenue sources and lower customer concentration. But until OpenAI's self-developed chips reach mass production, the answer to this question remains unclear.

When buying Cerebras stock, you are essentially betting on two things: OpenAI will continue to choose Cerebras, and OpenAI's self-developed ASIC will not arrive prematurely. Neither of these is guaranteed.

Of course, there are also genuine bullish arguments: if the inference market grows as predicted, even if Cerebras captures only a small share of this market, the absolute number would still be considerable. The question is whether the $35 billion pricing reflects the most optimistic scenario.

Two $20 billion amounts appeared symmetrically in the period from the end of 2025 to April 2026.

One comes from the world's largest AI chip seller, acquiring the technology of an inference market competitor.

The other comes from the world's largest AI buyer, incubating a company that challenges NVIDIA in the inference market.

NVIDIA's $20 billion is defensive—it paid a record price to fill a technical gap it could not bridge on its own.

OpenAI's $20 billion is offensive—it is burning money to build an inference highway that does not depend on NVIDIA, while picking up a warrant on one of the toll booths along that road.

This war has no gunfire, but the flow of funds never lies. What these two amounts tell you is clearer than any AI press conference: control over the infrastructure of AI inference is being contested. And this market will account for two-thirds of total industry computing power spending in 2026.

Cerebras's IPO is the clarion call of this war.

Disclaimer: This article represents only the personal views of the author and does not represent the position or views of this platform. It is provided for informational purposes only and does not constitute investment advice of any kind to anyone. Any dispute between users and the author is unrelated to this platform. If any article or image on this page involves infringement, please send the relevant proof of rights and identity to support@aicoin.com, and the platform's staff will investigate.
