Title: Naive Ideas of Token Incentives and Market Trends Prevail Here
Author: Gagra
Translation: Deep Tide TechFlow
Abstract
This is not another optimistic venture-capital take on the "AI + Web3" space. We are bullish on the convergence of these two technologies, but this article is a call to action. Without one, that optimism will eventually lose its justification.
Why? Because developing and running the best AI models requires heavy capital expenditure on cutting-edge, often hard-to-obtain hardware, as well as domain-specific R&D. Crowdsourcing via crypto incentives, as most Web3 AI projects do, is not enough to offset the hundreds of billions of dollars invested by the large corporations that firmly control AI development. Given the hardware constraints, this may be the first major software paradigm that smart and creative engineers outside the incumbent organizations lack the resources to disrupt.
Software is "eating the world" at an accelerating pace, and that pace will soon grow exponentially as AI accelerates. For now, all of this pie flows to the tech giants, while end users, from governments and large enterprises to consumers, grow ever more dependent on their power.
Misaligned Incentives
All of this is unfolding at the worst possible time, when 90% of decentralized-network participants are busy chasing the easy windfalls of narrative-driven development. Yes, developers are following investors into our industry rather than the other way around. Motivations range from openly acknowledged to more subtly subconscious, but the narratives and the market dynamics built around them drive a large share of Web3 decision-making. Participants are so immersed in a reflexive bubble that they cannot see the world outside it, except insofar as it helps push the narrative further this cycle. And AI is clearly the biggest narrative, since it is in the middle of a boom of its own.
We have spoken with dozens of teams at the intersection of AI x Crypto and can confirm that many of them are very capable, mission-driven, and passionate builders. But human nature tends to give in to temptation and then rationalize the choice afterwards.
Easy access to liquidity has been the crypto industry's historical curse; it has slowed development and delayed useful adoption by years. It turns even the most devoted crypto believers toward "token speculation." The rationalization is that holding more capital in tokens may give these builders a better chance.
The relative immaturity of both institutional and retail capital lets builders make unrealistic claims while still being valued as if those claims had already come true. In practice, the result of these processes is moral hazard and capital destruction, and few such strategies work in the long run. Demand is the mother of all invention, and when the demand disappears, the invention goes with it.
This could not have come at a worse time. While the smartest tech entrepreneurs, national leaders, and businesses large and small race to secure their share of the AI revolution, crypto founders and investors have opted for "rapid gains." That, in our view, is the real opportunity cost.
Web3 AI Market Overview
Given the incentives described above, the classification of Web3 AI projects really boils down to:
Legitimate (further split into realists and idealists)
Semi-legitimate
Fakers
Essentially, we believe builders know perfectly well what it takes to stay in step with their Web2 competitors, which verticals stand a realistic chance in that race, and which are closer to pipe dreams, even if all of them can be marketed to venture capitalists and an unsophisticated public.
The goal is to be competitive now. Otherwise, the pace of AI development may leave Web3 behind while the world slides toward a dystopian Web4 of Western corporate AI on one side and Chinese state AI on the other. Those who cannot become competitive quickly and are counting on distributed technology to catch up over a longer horizon are too optimistic to be taken seriously.
Obviously this is a very rough summary, and even the fakers group contains at least a few serious teams (and perhaps more merely delusional ones). But this article is a call to action, so we make no attempt at objectivity; we would rather readers feel a sense of urgency.
Legitimate
Middleware for "bringing AI on-chain." The founders behind these solutions, though few, understand that decentralized training or inference of the models users actually want is currently impractical, if not impossible. For them, connecting the best centralized models to the on-chain environment so it can benefit from sophisticated automation is a good first step. At the moment, hardware-isolated enclaves that can host API access points (TEEs, i.e. trusted execution environments), bidirectional oracles (for indexing on-chain and off-chain data in both directions), and verifiable off-chain compute environments for agents appear to be the best solutions. There are also co-processor architectures that use zero-knowledge proofs (ZKPs) to snapshot state changes rather than verify full computations, which we also consider viable in the medium term.
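To illustrate the shape of the "verifiable off-chain computation" pattern this camp relies on, here is a minimal, hypothetical sketch: an off-chain agent calls a (stubbed) centralized model, then publishes a hash commitment of the input/output pair that an on-chain contract could store and later check against a revealed pair. All function names are invented for illustration; real systems would add signatures, TEE attestations, and an actual chain interaction.

```python
import hashlib
import json

def run_model(prompt: str) -> str:
    # Stand-in for a call to a centralized model API (hypothetical).
    return f"answer to: {prompt}"

def commitment(prompt: str, output: str) -> str:
    # Canonically serialize the input/output pair and hash it; an
    # on-chain contract could store this digest and later verify a
    # revealed pair by recomputing it.
    payload = json.dumps({"in": prompt, "out": output}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

out = run_model("what is the ETH price trend?")
digest = commitment("what is the ETH price trend?", out)
```

The commitment is deterministic, so any verifier recomputing the hash from the same pair gets the same digest; the heavy model call itself stays off-chain.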
A more idealistic approach to the same problem tries to verify off-chain inference so that it matches on-chain computation in its trust assumptions. The goal here, in our view, should be to let AI perform on-chain and off-chain tasks within a single coherent runtime. However, most proponents of verifiable inference talk about fuzzy goals such as "trusting model weights," which are unlikely to matter in the coming years, if ever. Founders in this camp have recently begun exploring alternative ways to verify inference, though all of it originally grew out of ZKPs. While many smart teams are working on so-called ZKML, they are taking on too much risk in expecting cryptographic optimizations to outpace the complexity and compute requirements of AI models. We therefore believe they are not yet ready to compete. Still, some recent progress is interesting and should not be ignored.
Semi-legitimate
Consumer applications that wrap closed and open-source models (e.g., Stable Diffusion or Midjourney for image generation). Some of these teams were first to market and have real user traction, so it would be unfair to dismiss them all as fakes, but only a handful are thinking deeply about how to evolve their underlying models in a decentralized way and innovate on incentive design. There are some interesting governance/ownership experiments here. Most projects in this category, however, simply slap a token onto an otherwise centralized wrapper around something like the OpenAI API, to capture a valuation premium or faster liquidity for the team.
What neither of the camps above addresses is training and inference of large models in decentralized environments. Today there is no way to train a base model in reasonable time without relying on tightly interconnected hardware clusters. And given the level of competition, "reasonable time" is the key factor.
There have recently been some promising research results; in theory, approaches such as differential data flow could be extended across distributed compute networks to increase their capacity in the future (as network capabilities catch up with data-flow requirements). But competitive model training still demands communication between localized clusters rather than across single distributed devices, as well as cutting-edge compute (retail GPUs are increasingly uncompetitive).
There has also been recent research progress on localizing inference by shrinking model size (one of the two approaches to decentralization), but no existing Web3 protocol is making use of it.
The problem of decentralized training and inference logically brings us to the last and most important of the three camps, and the one that triggers us most emotionally.
Fakers
Infrastructure plays here mostly sit in the decentralized-server space, offering bare hardware or decentralized environments for model training and hosting. There are also software-infrastructure projects pushing protocols such as federated learning (decentralized model training), or merging software and hardware components into a single platform where people can essentially train and deploy their decentralized models end to end. Most of them lack the sophistication actually required to solve the stated problems, and the naive idea of "token incentives + market trends" prevails here. None of the solutions we have seen in public or private markets can mount meaningful competition right now. Some may grow into viable (but niche) products, but what we need now are fresh, competitive solutions, and those can only come from innovative designs that attack the bottlenecks of distributed computing. In training, not only speed but also the verifiability of completed work and the coordination of training workloads are major problems, on top of the bandwidth bottleneck.
We need a set of competitive, truly decentralized base models, and those require decentralized training and inference to work. If computers become intelligent while AI stays centralized, there will be no world computer to speak of, short of some dystopian version of one.
Training and inference sit at the core of AI innovation. While the rest of the AI world moves toward ever-tighter architectures, Web3 needs orthogonal solutions to compete with them, because the feasibility of competing head-on is shrinking fast.
Scale of the Problem
It all comes down to compute. Throw more of it at the problem and you get better results, in training and inference alike. Yes, there are tweaks and optimizations along the way, and compute itself is not homogeneous; there are all kinds of new approaches for overcoming the bottlenecks of traditional von Neumann processor architectures. But ultimately, it all comes down to how many matrix multiplications you can run, over how large a block of memory, and how fast.
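To make the "it all comes down to matrix multiplication" point concrete, here is a back-of-the-envelope FLOP count. The shapes and the throughput figure are illustrative assumptions, not numbers from the article.

```python
def matmul_flops(m: int, k: int, n: int) -> int:
    # A dense matmul of an (m, k) matrix by a (k, n) matrix costs
    # about 2*m*k*n FLOPs: one multiply and one add per accumulated term.
    return 2 * m * k * n

# Assumed, illustrative shapes: one 8192x8192 weight matrix applied
# to a batch of 1024 tokens.
flops = matmul_flops(1024, 8192, 8192)

# At an assumed sustained throughput of 100 TFLOP/s:
seconds = flops / 100e12
```

A single such multiply is cheap in isolation; the race is about running trillions of them, which is why memory size and raw throughput dominate everything else.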
That is why we see the so-called "hyperscalers" investing so heavily in data centers, each seeking to build a stack with AI models at the top and the hardware powering them underneath: OpenAI (models) + Microsoft (compute), Anthropic (models) + AWS (compute), Google (both), and Meta (increasingly both, by doubling down on data-center buildout). There are more nuances, interaction dynamics, and players involved, but we will not cover them all here. The big picture is that hyperscalers are investing unprecedented tens of billions of dollars in data-center expansion and in creating synergies between their compute and AI offerings, expecting enormous returns as AI permeates the global economy.
Let's just look at the expected buildout levels of these four companies this year alone:
Meta expects 2024 capital expenditure of 30-37 billion USD, likely skewed heavily toward data centers.
Microsoft's capital expenditure in 2023 was around 11.5 billion USD, and rumors point to another 40-50 billion USD invested in 2024-25! That can be partly inferred from the huge data-center investments already announced across several countries: 3.2 billion USD in the UK, 3.5 billion USD in Australia, 2.1 billion USD in Spain, 3.2 billion EUR in Germany, 1 billion USD in Georgia, and 10 billion USD in Wisconsin. And these are just some of the regional investments across its network of 300 data centers spanning more than 60 regions. There is also talk of Microsoft spending another 100 billion USD on a supercomputer for OpenAI!
Amazon's leadership expects capital expenditure to rise significantly in 2024 from 2023's 48 billion USD, primarily driven by the expansion of AWS infrastructure for AI.
Google spent 11 billion USD in Q4 2023 alone on expanding its servers and data centers. Management acknowledges these investments are made in anticipation of AI demand, and expects both the pace and the total of its infrastructure spending to rise significantly in 2024 because of AI.
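Summing the figures above gives a rough sense of scale. The values below are ballpark stand-ins derived from the numbers cited in this article (midpoints where a range is given, annualized or prior-year figures where 2024 guidance is vague); they are assumptions, not reported totals.

```python
# Rough 2024 capex estimates, in USD billions, from the article's figures.
capex_2024_est = {
    "Meta": (30 + 37) / 2,  # midpoint of the guided 30-37B range
    "Microsoft": 45,        # midpoint of the rumored 40-50B for 2024-25
    "Amazon": 48,           # 2023 actual; 2024 is guided higher
    "Google": 4 * 11,       # Q4-2023 spend of 11B, naively annualized
}
total = sum(capex_2024_est.values())  # on the order of 170B USD
```

Even with generous rounding down, the four companies' combined spend lands well above 150 billion USD for a single year.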
[Chart: amount already spent on NVIDIA AI hardware in 2023]
NVIDIA's CEO Jensen Huang has been touting 1 trillion USD of investment in AI acceleration over the coming years, a prediction he recently doubled to 2 trillion USD, reportedly because of the interest he is seeing from sovereign players. Analysts at Altimeter expect global AI-related data-center spending to reach 160 billion USD in 2024 and over 200 billion USD in 2025.
Now compare those numbers with what Web3 can offer independent data-center operators as an incentive to expand their capital expenditure on the latest AI hardware:
The combined market cap of all Decentralized Physical Infrastructure (DePIN) projects currently sits around 40 billion USD, mostly in relatively illiquid, largely speculative tokens. Essentially, these networks' market caps set the upper bound on the total capital expenditure of their contributors, since contributors build in exchange for token incentives. However, the current market cap is of little use here, because most of it has already been emitted.
So let's assume that an additional 80 billion USD (twice today's value) of private and public DePIN token capitalization comes to market as incentives over the next 3-5 years, and that all of it goes to AI use cases.
Even dividing this very generous estimate over 3 years, and comparing its dollar value with the cash the hyperscalers will spend in 2024 alone, it is clear that throwing token incentives at a bunch of "decentralized GPU network" projects is not enough.
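The arithmetic behind that claim can be sketched in a few lines. The 150B hyperscaler figure is a conservative stand-in based on the numbers cited earlier; all inputs are the article's rough assumptions, not measured data.

```python
# DePIN side: assume 80B USD of new token value over ~3 years,
# all of it spent incentivizing AI hardware buildout.
depin_incentives = 80                       # USD billions, assumed
years = 3
depin_per_year = depin_incentives / years   # about 26.7B per year

# Hyperscaler side: a conservative stand-in for 2024 AI data-center
# capex across Meta, Microsoft, Amazon, and Google combined.
hyperscaler_2024 = 150                      # USD billions, assumed

ratio = hyperscaler_2024 / depin_per_year   # hyperscalers outspend ~5-6x
```

Even under these favorable assumptions, the hyperscalers outspend the entire hypothetical DePIN incentive pool by well over 5x in a single year.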
Investors would also need tens of billions of dollars of demand to absorb these tokens, since the operators of such networks sell a large share of the coins they mine to cover capital-expenditure costs, and tens of billions more would be needed to pump those tokens enough to incentivize buildout that outgrows the hyperscalers.
Yet, as anyone familiar with how Web3 servers are currently run knows, a large share of "decentralized physical infrastructure" actually runs on those same hyperscalers' clouds. Sure, surging demand for GPUs and other AI-specialized hardware is also driving new supply, which should eventually make renting or buying them from the cloud cheaper. At least, that is the expectation.
But consider this at the same time: NVIDIA now has to prioritize which customers get its latest-generation GPUs. Meanwhile, NVIDIA is also starting to compete with the biggest cloud providers on their own turf, offering AI platform services to enterprise customers already locked into those hyperscale clouds. That will eventually force it either to build its own data centers over time (eroding the fat margins it currently enjoys, so unlikely) or to significantly restrict sales of its AI hardware to its network of partner cloud providers.
On top of that, NVIDIA's competitors are bringing their own AI-specific hardware to market, mostly built on chips from TSMC, the same foundry NVIDIA uses. So essentially all AI hardware companies are competing for TSMC capacity, and TSMC, too, has to prioritize certain customers. Samsung, and potentially Intel (which is trying to fight its way back to the leading edge of chip manufacturing), may absorb some additional demand, but TSMC currently produces most AI-related chips, and scaling and calibrating cutting-edge fabrication (3nm and 2nm) takes years.
Most importantly, virtually all cutting-edge chip fabrication today happens at TSMC in Taiwan and Samsung in South Korea, right by the Taiwan Strait, where the risk of military conflict could materialize before the US facilities meant to offset this dependence (which are not expected to produce next-generation chips for several years) come online.
Finally, China is largely cut off from the latest generation of AI hardware by US restrictions on NVIDIA and TSMC, and it is competing for whatever compute remains, just as the Web3 DePIN networks are. Unlike Web3, though, Chinese companies do have competitive models of their own, notably the large language models (LLMs) of companies like Baidu and Alibaba, which need large fleets of previous-generation hardware to run.
So there is a non-trivial risk that, for one or more of the reasons above, the hyperscalers escalate the war for AI dominance and prioritize their own AI hardware needs over external access by other cloud customers. Essentially, this is a scenario in which they take all AI-related cloud capacity for themselves, no longer offering it to anyone else, while also buying up all the latest hardware. If that happens, the remaining compute supply will be bid up even further by the other big players, sovereign states included. Meanwhile, consumer-grade GPUs keep falling further behind.
Obviously this is an extreme scenario, but the prize is big enough that the major players will not back down even if the hardware bottlenecks persist. That would leave decentralized operators such as tier-2 data centers and owners of retail-grade hardware, who make up the majority of Web3 DePIN providers, out of the race.
The Other Side of the Coin
While crypto founders may not have realized it yet, the AI giants are watching crypto closely. Government pressure and competition may push them to adopt it to avoid having their AI shut down or heavily regulated.
Stability AI's founder recently resigning to start "decentralizing" his company is one of the first public hints of this. He had earlier made no secret of his plans to launch a token after a successful IPO, which rather gives away the real motivations behind the move.
Similarly, while Sam Altman is not operationally involved in Worldcoin, the crypto project he co-founded, its token certainly trades like an OpenAI proxy. Only time will tell whether there is a path to connect the free-internet-money project with the AI R&D one, but the Worldcoin team, too, seems aware that the market is testing that hypothesis.
To us, it makes sense that AI giants might explore different paths toward decentralization. The problem we see is that Web3 has yet to propose meaningful solutions. "Governance tokens" are for the most part a joke, and today only tokens that explicitly avoid direct links between asset holders and the development and operation of their networks, such as $BTC and $ETH, are truly decentralized.
The same (lack of) incentives that slow technological development also slow the development of alternative designs for governing crypto networks. Startup teams slap a "governance token" on their product and hope to figure it out along the way, only to end up entrenched in "governance theater" around resource allocation.
Conclusion
The AI race is on, and everyone is taking it very seriously. We find no flaw in the big tech companies' thinking: more compute means better AI; better AI means lower costs, new revenue, and greater market share. To us that means the bubble is rational, but all the grifters will still be washed out in the inevitable shakeouts.
Centralized big-corporate AI is dominating the field, and legitimate startups struggle to keep up. The Web3 space was late to the race but is joining it too. Market rewards for crypto AI projects are far too juicy compared with what Web2 startups in the field can expect, which diverted founders' attention from shipping product to pumping tokens at a critical moment, just as the window of opportunity to catch up is closing fast. So far, no orthogonal innovation has emerged here that could compete without scaling compute.
There is now a credible open-source movement around consumer-facing models, initially pushed forward by centralized players that chose to compete with their larger closed-source rivals (e.g., Meta and Stability AI). But the community is now catching up and putting pressure on the leading AI companies. That pressure will keep weighing on closed-source AI development, though it will not matter substantially until open source actually catches up. This is another big opportunity for the Web3 space, but only if it solves decentralized model training and inference.
So while on the surface there appears to be an opening for "classic" disruptors, the reality could not be further from it. AI is overwhelmingly about compute, and that cannot change without breakthrough innovation in the next 3-5 years, a period critical to determining who controls and steers AI development.
And even though demand spurs efforts on the supply side, the compute market itself cannot simply "bloom": competition between manufacturers is constrained by structural factors such as chip fabrication and economies of scale.
We remain optimistic about human ingenuity and believe there are enough smart and noble people out there to try to crack the AI problem in a way that favors the free world over top-down corporate or state control. But the odds look very slim, a moonshot bet at best, while Web3 founders are busy chasing financial upside rather than real-world impact.