Author: Luo Yihang

In January 2009, an anonymous person invented something called a "token." You invest computational power to obtain a token, which then circulates, is priced, and is traded within a consensus network. The entire crypto economy was born from this. More than a decade has passed, and people are still debating whether this token has value.
In March 2025, a man in a leather jacket redefined another type of token. You invest computational power to produce a token, which is consumed immediately in AI inference: thinking, reasoning, writing code, making decisions. The entire AI economy accelerated from this. No one debates whether this token has value, because you just used millions of them this morning.
Two types of tokens, the same name, the same underlying structure: computational power goes in, valuable things come out.

In March 2026, I sat in the NVIDIA GTC conference hall, listening to Jensen Huang give a keynote speech that hardly showcased any products. Yes, he announced Vera Rubin, a product that combines CPU and GPU. But this time, he didn’t talk about chip specifications or manufacturing processes; he talked about a complete economics of token production, pricing, and consumption:
Which model corresponds to which token speed; which token speed corresponds to which pricing range; which pricing range requires what level of hardware to support.
He even helped the CEOs and decision-makers with the corporate checkbooks in the audience devise a computational power allocation plan for data centers: 25% for free tier, 25% for mid-tier, 25% for high-end, 25% for high-premium tier.
Yes, this time he wasn't selling a specific GPU setup, unlike two years ago when he sold Blackwell. This time, he was selling something bigger. After two hours, the one sentence I felt he most wanted to convey was: welcome to consume tokens, tokens that only NVIDIA's factories can produce.
In that moment, I realized that this man and the anonymous person who mined the first token 17 years ago were performing structurally identical acts.
The Same Transformation Rules
The anonymous person known as "Satoshi Nakamoto" wrote a nine-page white paper in 2008, designing a set of rules: invest computational power, complete a mathematical proof (Proof of Work), and receive crypto tokens as rewards.
The brilliance of this rule lies in its requirement that no one needs to trust anyone else: simply by accepting the rules, you automatically become a participant in the economy. And the rule works; after all, it brought together so many people who would happily deceive one another.
And Jensen Huang did something structurally identical on the GTC 2026 stage.
He displayed a graph that captured the tension between reasoning efficiency and token consumption. The Y-axis was throughput (how many tokens are produced per megawatt of power); the X-axis was interactivity (the token speed each user perceives). Below the X-axis, he marked five pricing tiers: Free, running Qwen 3, at $0 per million tokens; Medium, running Kimi K2.5, at $3; High, running GPT MoE, at $6; Premium, running GPT MoE with 400K context, at $45; and Ultra, at $150.
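Read together with the 25/25/25/25 allocation plan Huang sketched for data-center operators, this price ladder implies a simple revenue model. A minimal sketch in Python, using only the per-tier prices from the slide; the equal traffic split follows Huang's suggested allocation, and the assumption that every tier yields the same token volume per unit of compute is mine, for illustration only:

```python
# Pricing tiers from Huang's GTC slide: USD per million tokens.
TIERS = {
    "Free (Qwen 3)": 0.0,
    "Medium (Kimi K2.5)": 3.0,
    "High (GPT MoE)": 6.0,
    "Premium (GPT MoE 400K ctx)": 45.0,
}

# Huang's suggested data-center allocation: equal compute across four tiers.
ALLOCATION = {name: 0.25 for name in TIERS}

def blended_price_per_million(tiers, allocation):
    """Average revenue per million tokens, weighted by compute share.

    Illustrative assumption: each unit of compute yields the same number
    of tokens in every tier. In practice, cheaper tiers run smaller
    models and produce more tokens per GPU-hour, so the real blend skews
    lower.
    """
    return sum(tiers[name] * allocation[name] for name in tiers)

print(f"${blended_price_per_million(TIERS, ALLOCATION):.2f} per million tokens")
```

Under these assumptions the blend comes out to $13.50 per million tokens. The exact figure matters less than the structure: the allocation plan and the price ladder together fix a data center's revenue ceiling before a single GPU is racked.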
This graph could almost serve as the cover for Huang’s “Token Economics” white paper.

Satoshi defined "what valuable computation is": finding a SHA-256 hash below a target threshold is valuable. And Huang defined "what valuable reasoning is": producing tokens at a specific speed, for specific scenarios, under given power constraints is valuable.
Neither Satoshi nor Huang directly produced tokens; they defined the production rules and pricing mechanisms for tokens.
One statement from Huang on stage could almost be directly included in the abstract of a token economics white paper:
Tokens are the new commodity, and like all commodities, once it reaches an inflection, once it becomes mature, it will segment into different parts.
Tokens are the new commodity. Commodities naturally stratify once they mature. He wasn’t describing the current situation; he was predicting market structures and then precisely laying out his hardware product line across every layer of that structure.
The production processes of the two types of tokens even share a semantic symmetry: producing crypto tokens is called mining, and producing AI tokens is called inference.
Mining and inference are, at bottom, both ways of turning electricity into money. Miners spend on electricity to mine crypto tokens and sell them; inference clusters and AI agents spend electricity to generate AI tokens, which are priced per million and sold to developers. The middle steps differ, but the two ends are the same: on the left is the electricity meter, on the right is the revenue.
Two Ways of Writing Scarcity
The most significant design decision made by Satoshi was not Proof of Work, but the capped total supply of 21 million bitcoins. He created artificial scarcity with code—regardless of how many mining machines flood in, the total supply of bitcoin will never exceed 21 million. This scarcity is the value anchor for the entire crypto economy.
On the other hand, Huang created natural scarcity using physical laws. He said:
"You still have to build a gigawatt data center. You still have to build a gigawatt factory, and that one gigawatt factory for 15 years amortized... is about $40 billion even when you put nothing on it. It's $40 billion. You better make for darn sure you put the best computer system on that thing so that you can have the best token cost."
A 1GW data center will never become 2GW. This is not a code limitation; it's a physical law.
Land, electricity, cooling—all these have physical limits. The amount of tokens you can produce from the factory you built for $40 billion over its 15-year lifecycle entirely depends on what computational architecture you put in it.
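Huang's $40 billion figure makes the arithmetic concrete. A rough sketch of the capex side of token cost; only the capex and the 15-year amortization come from his talk, while the site-wide throughput number is a hypothetical assumption chosen purely to illustrate the shape of the calculation:

```python
# Amortized capex cost per token for a fixed-size "token factory".
CAPEX_USD = 40e9           # Huang: ~$40B for a 1 GW facility
LIFETIME_YEARS = 15        # Huang: amortized over 15 years
SECONDS_PER_YEAR = 365 * 24 * 3600

# Hypothetical aggregate throughput for the whole 1 GW site (tokens/sec).
# This number is an illustrative assumption, not a figure from the keynote.
site_tokens_per_second = 1e9

lifetime_tokens = site_tokens_per_second * SECONDS_PER_YEAR * LIFETIME_YEARS
capex_per_million_tokens = CAPEX_USD / lifetime_tokens * 1e6
print(f"${capex_per_million_tokens:.4f} capex per million tokens")

# Double the throughput with a better architecture and the capex cost
# per token halves. Power, land, and cooling stay fixed, so the
# computing architecture is the only lever left. That is Huang's point.
```

Under this assumed throughput, capex alone contributes about eight cents per million tokens; the real lever is that throughput per gigawatt is the only variable the operator still controls.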

Satoshi’s scarcity can be forked. If one dislikes the cap of 21 million, they can fork a new chain, change it to 200 million, call it Ethereum or whatever, and casually publish a new white paper. And indeed, people have done so, enjoying it immensely.
However, Huang's manufactured scarcity cannot be forked. After all, you cannot fork the second law of thermodynamics, you cannot fork the capacity of a city's power grid, and you cannot fork the physical area of a piece of land.
But whether it is Satoshi or Huang, the scarcity they created led to the same result: an arms race in hardware.
The history of mining is: CPU→GPU→FPGA→ASIC. Each generation of specialized hardware rendered the previous generation obsolete. And the history of AI training and reasoning is repeating itself: Hopper→Blackwell→Vera Rubin→Groq LPU. General hardware starts, specialized hardware takes hold. The Groq LPU showcased by Huang at this year’s GTC, released after the acquisition of Groq, is a deterministic data flow processor. Static compilation, compiler scheduling, no dynamic scheduling, 500MB on-chip SRAM—it is architecturally the ASIC of the reasoning domain. It does one thing, but it does it exceptionally well.
Interestingly, GPUs played key roles in both waves.
Around 2013, miners discovered that GPUs were better suited than CPUs for mining crypto tokens, and NVIDIA graphics cards were sold out. A decade later, researchers found that GPUs are the best tools for training and reasoning AI models, and NVIDIA’s data center cards were once again sold out. As a class of processors, GPUs have served two generations of the token economy.
The difference is that the first time, NVIDIA was a passive beneficiary, and that was that. The second time, as the main battlefield of AI compute consumption shifted from pre-training to inference, NVIDIA seized the opportunity to actively design the entire game, becoming the shaper of AI's rules.
The World's Most Profitable Shovel
In the gold rush, the most profitable were not the gold miners but the merchants supplying them, like Levi Strauss, who sold work pants rather than digging for gold. In the mining boom, the most profitable were not the miners but Bitmain and Wu Jihan, who sold mining machines. In the AI training and inference wave, the most profitable are not the foundation models and agents but NVIDIA, which sells GPUs.
But to be honest, the roles of Bitmain and NVIDIA in their respective industries are no longer comparable.
Bitmain only sells mining machines, while NVIDIA was once Bitmain's supplier. Once you bought the mining machine, what coins to mine, which mining pool to go to, and at what price to sell are none of Bitmain's concern. It is merely a pure hardware supplier, profiting from one-time equipment sales.
NVIDIA is different. It doesn't just sell hardware. Especially since inference-side AI exploded in 2025, it has deeply defined what should be produced with which GPUs, how tokens should be priced, to whom they should be sold, and how data centers should allocate compute... All of this is in Huang's keynote slides: he segmented the market into five tiers, detailing which models, context lengths, interaction speeds, and prices correspond to each... NVIDIA standardized and formatted the future inference-driven AI market.
Around 2018, global hash power was concentrated in a few large mining pools (F2Pool, Antpool, BTC.com) competing for share, while the supply of mining machines was highly concentrated in Bitmain.
It is similar for today's NVIDIA: roughly 60% of its revenue comes from competing hyperscalers such as AWS, Azure, GCP, Oracle, and CoreWeave, while 40% comes from a dispersed long tail of AI-native companies, sovereign AI projects, and enterprise clients. The big "mining pools" contribute the bulk of the revenue; the smaller "miners" provide resilience and diversity.
The two ecosystems have exactly the same structure. But Bitmain later met competitors: MicroBT (maker of Whatsminer), Innosilicon, and Canaan have all been eating into its share. Mining machines are relatively simple ASIC designs, which gives followers an opening. Shaking NVIDIA, however, looks increasingly difficult: 20 years of the CUDA ecosystem, hundreds of millions of installed GPUs, six generations of NVLink interconnect, and Groq's disaggregated inference architecture after the acquisition. NVIDIA's technological complexity and ecosystem moat render most competitive playbooks ineffective.
This may well hold for another 20 years.
The Fundamental Fork Between Two Types of Tokens
What fundamentally differentiates crypto tokens from AI tokens is the motivation and psychology behind their use.
The demand side for crypto tokens is speculative. No one "needs" Bitcoin to perform work. All the white papers claiming blockchain tokens can solve your problems were written by scammers. You hold crypto because you believe someone will buy it from you at a higher price in the future. The value of Bitcoin comes from a self-fulfilling prophecy: if enough people believe it has value, it has value. This is a faith economy.
In contrast, the demand side for AI tokens is productivity. Nestlé needs tokens for supply-chain decisions: its supply-chain data now refreshes every 3 minutes instead of every 15, cutting costs by 83%, value that maps directly onto the P&L. NVIDIA's engineers, all of them, already need tokens to write code instead of writing it by hand; research teams need tokens to do research. You don't need to believe tokens have value; you just need to use them, and the value proves itself through use.
This is the essential difference between the two types of tokens. Crypto tokens are produced to be held and traded—they derive their value from not being used. AI tokens are produced to be consumed immediately—they derive their value from the very moment they are used.
One is digital gold, becoming more valuable by holding; the other is digital electricity, consumed as soon as it is produced.
This distinction means the AI token economy will not bubble the way the crypto token economy did. Bitcoin swings wildly because the prices of speculative assets are driven by emotion. Token prices, by contrast, are driven by usage and production cost. As long as AI remains useful, as long as people still use Claude Code to write code, ChatGPT to write reports, and agents to run business processes, demand for tokens will not crash. It relies not on faith but on necessity.
In 2008, the Bitcoin white paper needed to repeatedly justify why a decentralized electronic cash system had value. Seventeen years later, people are still debating.
In 2026, token economics triggered no debate; it became a consensus that needed no justification. When Huang stood on the GTC stage and said "tokens are the new commodity," no one questioned it, because every single person in the audience had burned millions of tokens in Claude Code or ChatGPT that very morning. They did not need to be convinced that tokens had value; their credit card bills had already proven it.
In this sense, Huang really is a Satoshi remade: the one who monopolizes mining-machine production, defines the use cases and specifications of the token, and hosts an annual show at San Jose's SAP Center to demonstrate how powerful the next generation of AI training and inference machines will be.
Satoshi has the charm of restraint: he designed the rules, handed them over to code, and disappeared. That is the cypherpunk romance. Huang is more businessman than scientist: he designed the rules, maintains them personally, keeps laying bricks, and deepens his moat.
The tokens you once had to believe in before you could see them are now visible without belief. They are the next unit after the watt, the ampere, and the bit.