Source | Hard·AI
Author | Chang Jiashuai
This year, generative AI has undoubtedly entered a "rapid development" stage.
ChatGPT, Midjourney, Wenxin Yiyan (Baidu's ERNIE Bot), and other consumer-facing products have brought AI into thousands of households; old-school tech giants like Adobe and Microsoft have "rejuvenated" themselves with AI; and NVIDIA, the "AI shovel seller" whose performance and market value have both soared, has become the undisputed star of this year's capital markets.
However, from the front-running Microsoft and OpenAI to the fast-following Google and Meta, most tech companies' AI products are still burning money to generate buzz, and it is far from clear that consumers will pay for them.
The unclear downstream prospects have raised a series of questions—
Why hoard so many GPUs? How much money needs to be made to break even? Who will ultimately foot the bill?
On September 20, David Cahn, a partner at the venture capital firm Sequoia Capital, distilled these questions in an article as the AI industry's "$200 billion question."
David Cahn believes that in order to break even, the AI industry needs to achieve $200 billion in revenue, but currently there is still a gap of $125 billion…
Therefore, David Cahn believes that although hoarding a large amount of GPU computing power may be a good thing in the long run, in the short term, this may cause chaos.
The following is an excerpt from David Cahn's original article, enjoy~ ✌️
Since last summer, the generative AI wave has shifted into overdrive. The catalyst was Nvidia's Q2 earnings guidance and its subsequent better-than-expected results, which signaled to the market that demand for GPUs and AI model training is "insatiable."
Before Nvidia's announcement, consumer products like ChatGPT, Midjourney, and Stable Diffusion had already brought AI into the public eye. With Nvidia's impressive results, founders and investors obtained tangible evidence that AI can create tens of billions of dollars of net-new revenue, prompting a full-speed charge into the field.
Investors have extrapolated heavily from Nvidia's results: AI investment is now proceeding at a breakneck pace, and valuations have hit record highs. But important questions remain: what are all these GPUs actually for? Who are the end customers? How much value must be created for this torrent of investment to pay off?
Consider the following scenario:
For every $1 spent on GPUs, there is approximately $1 in data center energy costs. In other words, if Nvidia can sell $50 billion worth of GPUs by the end of the year (according to conservative estimates by analysts), data center spending will reach $100 billion.
Furthermore, assume that the end customers of those GPUs—the companies running applications on them—need a 50% margin on their AI business to break even. That implies at least $200 billion in revenue is required to recoup the upfront investment. And this doesn't even account for the cloud providers' cut; if they are to profit as well, the total revenue requirement is higher still.
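The arithmetic above can be sketched as a quick back-of-envelope calculation (a hedged sketch using the article's round-number assumptions; the variable names are illustrative):

```python
# Back-of-envelope math behind the $200B figure.
# All amounts in USD billions; figures follow the article's assumptions.
gpu_spend = 50                    # analysts' conservative estimate of Nvidia GPU sales
energy_spend = gpu_spend * 1.0    # ~$1 of data center energy per $1 spent on GPUs
total_datacenter_spend = gpu_spend + energy_spend   # $100B

assumed_margin = 0.50             # margin end customers need to break even
required_revenue = total_datacenter_spend / assumed_margin

print(required_revenue)           # 200.0 — the "$200 billion question"
```

Note that `required_revenue` excludes cloud-provider margins, so it is a floor, not a ceiling.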
According to public filings, most of the incremental data center buildout is coming from the major tech companies: Google, Microsoft, and Meta have all reported significant increases in data center capital expenditures. Companies like ByteDance, Tencent, and Alibaba are also reported to be major Nvidia customers. Looking ahead, Amazon, Oracle, Apple, Tesla, and CoreWeave may also spend heavily on data center construction.
An important question to ask is: how much of this capex buildout is tied to actual end-customer demand, and how much is built on "anticipated demand"? This is a $200 billion question.
According to The Information, OpenAI's annualized revenue is about $1 billion, and Microsoft has said that products like Copilot are expected to bring in $10 billion in annual revenue. Add in other companies: assume Meta and Apple can also each earn $10 billion a year from AI, and that Oracle, ByteDance, Alibaba, Tencent, X, and Tesla can each earn $5 billion from their AI businesses. The total still comes to only about $75 billion—
—These are all hypotheticals, but the point stands: even granting substantial AI profits, at today's spending levels the industry is still at least $125 billion short of breaking even.
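Putting those hypothetical revenues against the break-even target makes the shortfall concrete (an illustrative sketch using the aggregate figures from the text, not a forecast):

```python
# The revenue gap the article highlights, in USD billions.
required_revenue = 200    # break-even target from the GPU + energy + margin estimate
projected_revenue = 75    # rough total of the optimistic company-level assumptions

gap = required_revenue - projected_revenue
print(gap)                # 125 — the shortfall at today's spending levels
```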
There is a great opportunity for startups to fill this gap, and our goal is to "follow the GPUs" and find the next generation of startups that create real end customer value using AI technology—we hope to invest in these companies.
The goal of this analysis is to highlight the gap we see today.
The hype around AI has finally caught up with the deep learning breakthroughs accumulating since 2017. That is good news. A major capex buildout is underway, and in the long run it should significantly reduce the cost of AI development. In the past, you had to buy your own server racks to build any application; now you can use the public cloud at far lower cost.
Similarly, many AI companies today are using most of their venture capital for GPUs. As today's supply constraints give way to oversupply, the cost of running AI workloads will decrease. This should stimulate more product development. It should also attract more founders to start businesses in this field.
In the history of technology cycles, overbuilding of infrastructure often burns capital, but it also releases future innovation by lowering the marginal cost of new product development. We expect this pattern to be repeated in the field of artificial intelligence.
The lesson for startups is clear: as a community, we need to shift our thinking from infrastructure to end customer value. Satisfied customers are the basic requirement for every great company. In order for AI to have an impact, we need to find ways to use this new technology to improve people's lives. How can we transform these amazing innovations into products that customers use, love, and are willing to pay for every day?
The buildout of AI infrastructure is underway—infrastructure is no longer the problem. Many foundation models are in development—that is no longer the problem either. And today's AI tools are already quite good.
Therefore, the $200 billion question is:
How do you plan to use this infrastructure? How will you use it to change people's lives?
This article is translated from:
https://www.sequoiacap.com/article/follow-the-gpus-perspective/?utm_source=bensbites&utm_medium=referral&utm_campaign=dall-e-3-image-generation-in-chatgpt
Disclaimer: This article represents only the personal views of the author and does not reflect the position or views of this platform. It is shared for informational purposes only and does not constitute investment advice to anyone. Any dispute between users and the author is unrelated to this platform. If any article or image on this page infringes your rights, please send the relevant proof of rights and identity to support@aicoin.com, and the platform's staff will investigate.