Author: Edward Zitron
Article Translation: Block unicorn

If you are paying attention to AI in the cryptocurrency industry, or to AI in the traditional internet sector, you need to think seriously about this industry's future. This article is quite long, so if you lack the patience for it, feel free to stop here.
What I have written in this article is not meant to sow doubt or to "criticize" for its own sake, but to soberly assess the situation we are in today and where the current path is likely to lead. I believe the artificial intelligence boom - more precisely, the generative AI boom - is (as I have said before) unsustainable and will eventually collapse. I also fear that this collapse could deal a devastating blow to big tech, seriously damage the startup ecosystem, and further erode public support for the tech industry.
The reason I am writing this today is that the situation seems to be changing rapidly, and several omens of an AI reckoning have appeared: OpenAI's hastily launched o1 model (codenamed "Strawberry"), which amounts to "a big, stupid magic trick"; rumors that future models from OpenAI (and others) will be more expensive; layoffs at Scale AI; and leaders departing OpenAI. These are all signs that things are starting to fall apart.
So I think it is necessary to explain how serious the situation is, and why we have reached the stage of shattered illusions. I want to convey my concern about how fragile this movement is - how much obsession and how little direction got us here - and my hope that some people can do better.
In addition - and perhaps this is something I have not paid enough attention to before - I want to emphasize the human cost of the AI bubble bursting. Whether Microsoft and Google (and generative AI's other major backers) gradually wind down their investments, or hollow out company resources to keep propping up OpenAI and Anthropic (and their own generative AI projects), I believe the end result is the same. I worry that thousands of people will lose their jobs, and that much of the tech industry will come to realize that the only thing that can grow forever is cancer.
There will not be much levity in this article. I am going to paint you a dark picture - not just of the big AI players, but of the entire tech industry and its workers - and explain why I think this chaotic, destructive ending is coming sooner than you imagine.
Continue reading and enter thinking mode.
How Can Generative AI Survive?
Currently, OpenAI - nominally a non-profit, though perhaps not for much longer - is raising a new round of financing at a valuation of at least $150 billion, expecting to bring in at least $6.5 billion and possibly as much as $7 billion. The round is led by Josh Kushner's Thrive Capital, with NVIDIA and Apple rumored to be participating. As I have detailed before, OpenAI will have to keep raising unprecedented amounts of capital to survive.
Worse, according to Bloomberg, OpenAI is also trying to raise $5 billion in debt from banks in the form of a "revolving credit facility," which typically carries higher interest rates.
"The Information" also reported that OpenAI is negotiating with MGX - an investment fund backed by the United Arab Emirates with $100 billion in funds - seeking investment in AI and semiconductor companies, and may also raise funds from the Abu Dhabi Investment Authority (ADIA). This is an extremely serious warning signal, as no one voluntarily seeks funding from the UAE or Saudi Arabia. You only turn to them for help when you need a large amount of money and are unsure if you can get it from elsewhere.
Note: As CNBC pointed out, one of MGX's founding partners, Mubadala, holds about $500 million of Anthropic's shares, acquired from FTX's bankruptcy estate. One can only imagine how "thrilled" Amazon and Google must be about that conflict of interest!
As I discussed at the end of July, OpenAI needs to raise at least $3 billion - and more likely $10 billion - just to keep going. It is on track to lose $5 billion in 2024, and that number will likely grow as more complex models demand more compute and more training data. Anthropic's CEO Dario Amodei has predicted that future models could cost as much as $100 billion to train.
By the way, the "$150 billion valuation" refers to how OpenAI prices the shares it sells to investors - though "shares" is a fuzzy term here. In a normal company, investing $1.5 billion at a $150 billion valuation would typically buy you 1% of the company; in OpenAI's case, things are far more complicated.
Earlier this year, OpenAI tried to raise at a $100 billion valuation, but some investors balked at the lofty valuations of generative AI companies, as reported by The Information's Kate Clark and Natasha Mascarenhas.
To close this round, OpenAI will likely have to convert from a non-profit into a for-profit entity - but the most confusing part is what investors actually receive. The Information's Kate Clark reported that investors in this round were told (quoting verbatim) they "won't get traditional equity for their investment… instead, they'll receive units that promise a share of the company's profits - once it becomes profitable."
It is not clear that converting to a for-profit solves this problem, because OpenAI's strange "non-profit with a for-profit subsidiary" structure means that, as part of its 2023 investment, Microsoft is entitled to 75% of OpenAI's profits - though a for-profit conversion could, in theory, come with actual equity. As it stands, investing in OpenAI gets you "profit participation units" (PPUs) rather than shares. As Jack Raines wrote in Sherwood: "If you own OpenAI PPUs but the company never turns a profit, and you can't sell them to someone who believes OpenAI eventually will, your PPUs are worthless."
Last weekend, Reuters reported that any $150 billion valuation will "hinge on" whether OpenAI can restructure the entire company and, in the process, remove the cap on investor returns, currently limited to 100 times the original investment. That cap was set in 2019, when OpenAI said any profits above it would be "returned to the non-profit for the benefit of humanity." The company has since modified the rule so that, starting in 2025, the cap can rise by 20% per year.
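To make that rule concrete, here is a minimal sketch of the compounding arithmetic, assuming a hypothetical $1 million investment (the investment amount is my own example; the 100x cap and the 20% annual increase are from Reuters' reporting):

```python
# Illustrative arithmetic only: how a 100x return cap would compound
# if raised by 20% per year starting in 2025, per Reuters' reporting.
# The $1M investment is a hypothetical, not a reported figure.
investment = 1_000_000   # hypothetical investment, in dollars
cap_multiple = 100.0     # initial cap: 100x the original investment

for year in range(2025, 2030):
    cap_multiple *= 1.20
    print(f"{year}: capped return = ${investment * cap_multiple:,.0f} ({cap_multiple:.0f}x)")
```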
Given OpenAI's existing profit-sharing agreement with Microsoft - to say nothing of its deep, persistent losses - any return is theoretical at best. To put it bluntly: however much the cap compounds, even by 500%, any multiple of zero profit is still zero.
Reuters also added that any conversion to a for-profit structure (which would lift its valuation above the recent $80 billion) would force OpenAI to renegotiate with existing investors, whose stakes would be diluted.
According to the Financial Times, investors must "sign an operating agreement that states: 'Any investment in [OpenAI's for-profit subsidiary] should be viewed in the spirit of a donation,' and that OpenAI 'may never be profitable.'" These terms are genuinely insane, and anyone who invests in OpenAI and suffers for it has only themselves to blame, because it is a patently absurd investment.
In effect, investors get no stake in OpenAI and no control over it - only a claim on the future profits of a company that loses more than $5 billion a year and will likely lose even more in 2025 (if it lasts that long).
OpenAI's models and products - we will get to their usefulness later - are wildly unprofitable to run. The Information reported that OpenAI will pay Microsoft about $4 billion in 2024 for the compute behind ChatGPT and its underlying models, and that is at a discounted rate of $1.30 per GPU per hour, versus the $3.40 to $4.00 per hour Microsoft charges other customers. Without its deep relationship with Microsoft, OpenAI's annual server bill could be as high as $6 billion - before other expenses such as staffing (roughly $1.5 billion a year). As I have discussed before, training currently costs about $3 billion a year and is almost certain to keep climbing.
Although "The Information" reported in July that OpenAI's annual revenue is between $3.5 billion and $4.5 billion, "The New York Times" reported last week that OpenAI's annual revenue "has now exceeded $2 billion", which means that the year-end data is likely to be close to the lower end of that estimate.
In short, OpenAI is burning cash, will burn more of it every year, and to keep doing so must raise money from investors who have signed a statement acknowledging that it "may never be profitable."
As I have written before, OpenAI's other problem is that generative AI (meaning its GPT models and the ChatGPT product) has not solved the kind of complex problems that would justify its enormous costs. These models are probabilistic, which creates huge, intractable problems - in plain terms, they don't know anything. They generate answers (or images, translations, or summaries) based on patterns in their training data, and model developers are rapidly exhausting that data.
The "illusion" phenomenon - that the model clearly generates unreal information (or generates what looks like incorrect content in images or videos) - cannot be completely solved with existing mathematical tools. Although it may reduce or mitigate the illusion phenomenon, its existence makes generative AI difficult to truly rely on in critical business applications.
And even where generative AI clears the technical bar, it is not clear it actually delivers business value. The Information reported last week that customers of the Microsoft 365 suite (which includes Word, Excel, PowerPoint, and Outlook, plus the enterprise packages closely tied to Microsoft's consulting business) have barely adopted its AI-powered "Copilot" products. Only 0.1% to 1% of its 4.4 million seats (at $30 to $50 per user) are paying for these features. One company testing the AI features said, "Most people don't currently see much value in it." Others said that "many enterprises haven't seen breakthroughs in productivity" and "aren't sure when they will."
So how much does Microsoft charge for these underwhelming features? An astonishing $30 extra per user per month, or up to $50 per month for the "sales assistant" feature. In effect, customers are being asked to pay roughly double their existing bill - on an annual contract, no less! - for products that don't appear to be all that useful.
Note: Microsoft's problems are tangled enough that they may deserve a dedicated piece of their own in the future.
This is the current state of generative AI: the leading company in productivity and business software cannot find customers willing to pay for the product, partly because the results are too mediocre and partly because the costs are too high to justify. If Microsoft has to charge this much, it is either because Satya Nadella wants to hit $50 billion in revenue by 2030 (a target revealed in a memo surfaced during a public hearing over Microsoft's acquisition of Activision Blizzard), or because the costs are too high to price it any lower - or both.
Yet almost everyone keeps insisting that the future of AI will be astonishing - that the next generation of large language models is just around the corner, and that it will be amazing.
Last week, we actually got a glimpse of that so-called "future." It was disappointing.
A Big, Stupid Magic Trick
OpenAI released o1 - codenamed "Strawberry" - on Thursday evening, to roughly the level of excitement of a trip to the dentist. In a series of tweets, Sam Altman described o1 as OpenAI's "most powerful and aligned" model. Though he admitted that o1 "still has flaws, is still limited, and seems more impressive on first use than after you spend more time with it," he promised it would deliver more accurate results on tasks with objectively correct answers, such as coding, math, or science questions.
That caveat is revealing in itself - we will come back to it. First, let's talk about how o1 actually works. I will introduce a few new concepts, but I promise not to wade into anything too technical. If you want OpenAI's own explanation, see its post "Learning to Reason with LLMs."
When faced with a problem, o1 breaks it down into individual steps, each of which, it hopes, leads toward the correct answer. This process is called "Chain of Thought." It is easiest to think of o1 as two parts of one model.
At each step, one part of the model applies reinforcement learning to the other (the part producing output), "rewarding" or "punishing" it based on the correctness of its progress (its "reasoning" steps); when punished, it adjusts its approach. This differs from how other large language models work: rather than simply generating an answer and handing it over, the model generates output, looks back over it, and discards or endorses the "good" steps on its way to a final answer.
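Here is a heavily simplified sketch of that generate-and-score loop. To be clear, OpenAI has not published o1's implementation; this toy is my own construction, and it only illustrates why stepwise rewards work when - and only when - a verifier already knows what "correct" looks like:

```python
# Toy "chain of thought with stepwise reward" - NOT OpenAI's implementation.
# A proposer suggests steps; a verifier scores them against a KNOWN target.
import random

def propose_step(state):
    """Stand-in for a model proposing the next reasoning step."""
    return state + random.choice([-1, 1, 2])

def reward(state, target):
    """Verifier: only possible because the target is already known."""
    return -abs(target - state)

def chain_of_thought(start, target, steps=20, candidates=5):
    state = start
    for _ in range(steps):
        # Sample several candidate steps; keep the one the verifier rewards most.
        state = max((propose_step(state) for _ in range(candidates)),
                    key=lambda s: reward(s, target))
        if state == target:
            break
    return state

print(chain_of_thought(start=0, target=7))  # usually reaches 7: the target is known
```

Notice that the reward function needs the target. With no known answer, there is nothing to score against - which is exactly the limitation described next.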
Though this sounds like a major breakthrough - even a step toward the much-vaunted artificial general intelligence (AGI) - it is not, as you can tell from OpenAI's choice to release o1 as a standalone product rather than an update to GPT. The examples OpenAI showed - math and science problems - are tasks whose answers can be known in advance and are either right or wrong, which is precisely what lets the model steer its "Chain of Thought" at each step.
You will notice that OpenAI did not demonstrate o1 solving complex problems with unknown answers, mathematical or otherwise. OpenAI itself admits that feedback shows o1 hallucinates more than GPT-4o, and that it is less willing than previous models to admit when it has no answer. That is because, although one part of the model is responsible for checking its output, that "checking" part hallucinates too (sometimes the AI fabricates plausible-sounding answers).
According to OpenAI, the "Chain of Thought" mechanism also makes o1 more persuasive to human users: because o1 gives more detailed answers, people are more inclined to trust its output, even when it is completely wrong.
If you think I am being too harsh on OpenAI, consider how the company is marketing o1. It describes the reinforcement-trained process as "thinking" and "reasoning," when in reality it is guessing - guessing at every step whether it is on track, toward a final result that is often already known.
This is an insult to actual thinkers - human beings. Human thought draws on a complex mix of factors, from personal experience and a lifetime of accumulated knowledge to the chemistry of the brain. While we too "guess" at steps of hard problems, our guesses are grounded in concrete facts, not the clumsy arithmetic o1 performs.
And, my goodness, it comes at a hefty cost.
o1-preview is priced at $15 per million input tokens and $60 per million output tokens - three times GPT-4o's input price and four times its output price. And there is a hidden cost: data scientist Max Woolf points out that OpenAI's "reasoning tokens" - the output generated on the way to a final answer - are invisible in the API. So not only is o1 more expensive, the very nature of the product makes users pay more, and more often: everything generated while "considering" an answer (to be clear, this model is not "thinking") is billed as well, making answers to complex problems such as coding extraordinarily expensive.
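To see how the hidden tokens compound the bill, here is a back-of-the-envelope sketch using the prices above. The token counts are invented for illustration; only the per-million prices come from the published pricing:

```python
# Illustrative o1-preview cost math. Prices are the published rates;
# the token counts below are made-up examples, not measured usage.
INPUT_PRICE = 15 / 1_000_000    # $ per input token
OUTPUT_PRICE = 60 / 1_000_000   # $ per output token (reasoning tokens billed too)

prompt_tokens = 2_000
visible_answer_tokens = 1_000
hidden_reasoning_tokens = 8_000  # invisible in the API, but still charged

visible_cost = prompt_tokens * INPUT_PRICE + visible_answer_tokens * OUTPUT_PRICE
billed_cost = visible_cost + hidden_reasoning_tokens * OUTPUT_PRICE

print(f"Cost of what you can see: ${visible_cost:.3f}")
print(f"Cost you are billed for:  ${billed_cost:.3f}")  # several times higher
```

In this hypothetical, the bill is several times what the visible input and output alone would suggest - and you cannot audit the difference, because the reasoning tokens never appear.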
Now let's talk about accuracy. On Hacker News - a Reddit-like site owned by Y Combinator, the startup accelerator Sam Altman used to lead - users complain that o1 "fabricates" nonexistent libraries and functions on programming tasks, and gets things wrong when the answer isn't easily found online.
On Twitter, startup founder and former game developer Henrik Kniberg asked o1 to write a Python program that multiplies two numbers and to predict the program's output. o1 wrote the code correctly (though it could have been a single line), but the predicted output was completely wrong. AI company founder Karthik Kannan ran his own programming test, and o1 "fabricated" a command that does not exist in the API he was using.
Another user, Sasha Yanshin, tried playing chess with o1; it conjured a piece onto the board out of thin air, then lost the game anyway.
Feeling a little mischievous, I asked o1 to list the states with the letter "A" in their names. It thought for eighteen seconds and listed 37 states - including Mississippi. The correct answer is 36.
When I asked it to list the states with the letter "W" in their names, it pondered for eleven seconds and included North Carolina and North Dakota - neither of which contains a "W."
I also asked o1 how many times the letter "R" appears in its own codename, "Strawberry." It answered: two.
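For contrast, a single line of ordinary Python answers the same question correctly, instantly, and for free:

```python
# Counting letters requires no "reasoning tokens" at all.
print("Strawberry".lower().count("r"))  # prints 3
```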
OpenAI claims that o1 performs at the level of a PhD student on hard benchmarks in physics, chemistry, and biology. It evidently performs rather worse in geography, basic English, math, and programming.
This is exactly the "big, stupid magic trick" I predicted in a previous newsletter. OpenAI launched "Strawberry" purely to prove to investors and the public that the AI revolution is still coming - and what it actually shipped is a cumbersome, boring, expensive model.
Worse still, it is hard to explain why anyone should care about o1. Sam Altman can tout its "reasoning ability" all he likes, but what the people funding him see are waits of 10 to 20 seconds, problems with basic factual accuracy, and no exciting new capabilities.
No one cares about "better" answers anymore - they want something new, and I don't think OpenAI knows how to deliver that. Altman is trying to humanize o1 by making it "think" and "reason," which clearly implies that it is a step towards artificial general intelligence (AGI), but even the most ardent AI supporters find it hard to get excited.
In fact, I believe o1 shows that OpenAI is both desperate and out of ideas.
Prices have not fallen, the software has not become more useful, and the "next generation" of models we have been hearing about since last November has turned out to be a dud. These models are also desperate for training data - to the point that nearly every large language model has ingested some form of copyrighted content. That desperation drove Runway, one of the largest generative-video companies, to mount a "company-wide effort" to scrape thousands of YouTube videos and pirated content to train its models; in August, a federal lawsuit accused NVIDIA of doing much the same to many creators to train its "Cosmos" AI software.
The current legal strategy is essentially to white-knuckle it, hoping these lawsuits do not set a precedent defining the training of these models as copyright infringement - which is exactly the conclusion of a recent interdisciplinary study backed by copyright advocates.
The lawsuits are moving forward. In August, a judge allowed further copyright-infringement claims to proceed against Stability AI and DeviantArt (which used those models), along with copyright and trademark claims against Midjourney. If any of these suits succeed, it will be catastrophic for OpenAI and Anthropic, and even more so for Google and Meta, whose datasets contain the work of millions of artists - because an AI model cannot in practice "forget" its training data, they would have to retrain from scratch, at a cost of billions of dollars, with a steep drop in their ability to perform tasks they were not particularly good at to begin with.
I am deeply concerned that this industry's foundations are a sandcastle on the beach. Large language models like ChatGPT, Claude, Gemini, and Llama are unsustainable, with no visible path to profitability: the compute-intensive nature of generative AI means they cost billions, even tens of billions, of dollars to train, and they demand so much training data that these companies have effectively stolen from millions of artists and writers and are hoping to get away with it.
Even setting those problems aside, generative AI and its associated architectures do not appear to deliver anything revolutionary, and the hype cycle around it stretches the term "artificial intelligence" past recognition. At its best, generative AI occasionally generates some content, summarizes a document, or does research at an uncertain, "faster" pace. Microsoft's Copilot for Microsoft 365 claims "thousands of skills" and "infinite possibilities" for businesses, but the examples it showcases amount to generating or summarizing emails, "kickstarting a presentation with a prompt," and querying Excel spreadsheets - features that may be useful, but are hardly revolutionary.
We are not in the "early stages". Since November 2022, capital expenditures and investments by large tech companies in infrastructure and emerging AI startups have exceeded $150 billion, and they have also invested in their own models. OpenAI has raised $13 billion and can hire anyone they want, and Anthropic is the same.
Yet the result of this industry-wide Marshall Plan for generative AI is four or five nearly identical large language models, the world's least profitable startups, and thousands of expensive but mediocre integrations.
Generative AI is being marketed on a stack of lies:
1. That it is artificial intelligence.
2. That it will get better.
3. That it will become true artificial intelligence.
4. That it is unstoppable.
Setting aside terms like "performance" - which are often used to describe the "accuracy" or "speed" of generated content, rather than skill level - large language models have actually entered a plateau. The so-called "more powerful" often does not mean "able to do more things," but rather "more expensive," which means you have just created something that costs more but does not add any functionality.
If the combined might of every venture capitalist and every big tech company still has not found a genuinely meaningful use case that lots of people will pay for, then no new use case is coming. Large language models - yes, that is where all these billions are going - will not suddenly become more capable because big tech and OpenAI pour in another $150 billion. Nobody is making these things more efficient, or at least nobody has succeeded. If someone had, they would be shouting it from the rooftops.
What we are facing is a collective delusion: a dead-end technology built on copyright theft (a problem that accompanies the technology of every era, and is unavoidable here), one that requires continuous infusions of capital, and one whose services are at best optional - dressed up as a form of automation it never actually delivers - at a cost of billions of dollars, with more to come. Generative AI runs not on money (or cloud credits) but on confidence. The problem is that confidence - like investor capital - is a finite resource.
My worry is that we are in something like a subprime AI crisis: thousands of companies have integrated generative AI into their businesses at prices that are far from stable and even further from profitable.
Almost every startup that bills itself as "AI-powered" is built on some combination of GPT and Claude - models developed by two deeply unprofitable companies (Anthropic expects to lose $2.7 billion this year) whose pricing is designed to attract customers, not to turn a profit. As noted earlier, OpenAI is propped up by Microsoft - both the "cloud credits" it receives and Microsoft's discounted pricing - so its prices depend entirely on Microsoft's continued goodwill as investor and service provider. Anthropic's deals with Amazon and Google carry the same problem.
Judging by their losses, I would guess that if OpenAI or Anthropic priced anywhere near actual cost, API calls would cost ten to a hundred times more - though there is no hard data to pin that down. Consider the numbers reported by The Information: OpenAI expects to spend $4 billion on Microsoft's servers in 2024 - at, I should add, a steep discount to what Microsoft charges other customers - and it still loses more than $5 billion a year.
OpenAI is almost certainly charging only a fraction of what its models cost to run, and it can only keep doing so by raising ever-larger rounds of venture capital and continuing to receive discounted pricing from Microsoft - a company that recently began describing OpenAI as a competitor. While it cannot be confirmed, it is reasonable to assume that Anthropic receives similar discounts from Amazon Web Services and Google Cloud.
Suppose Microsoft gave OpenAI $10 billion in cloud credits, and OpenAI spends $4 billion on servers plus an assumed $2 billion on training - costs that will certainly rise with the new o1 and "Orion" models. On those numbers, OpenAI will need more credits by 2025, or will have to start paying Microsoft in actual cash.
Microsoft, Amazon, and Google may keep offering discounted pricing, but the question is whether those deals are profitable for them. As we saw after Microsoft's latest quarterly earnings, investors are increasingly anxious about the capital expenditures (CapEx) required to build generative AI infrastructure, and many are skeptical of the technology's potential profitability.
What we genuinely do not know is whether generative AI is profitable for any of these big tech companies, because they fold the costs into other revenue lines. We cannot be certain, but I think that if these businesses were profitable in any way, they would be talking about the money they were making. They are not.
The market is deeply skeptical of the generative AI boom, and NVIDIA CEO Jensen Huang's failure to give a substantive answer on AI's return on investment helped wipe $279 billion off NVIDIA's market value in a single day. It was the largest single-day loss of market value in US stock-market history - equivalent to nearly five Lehman Brothers at its peak. The comparison ends there - NVIDIA is in no danger of failing, and even if it were, the systemic fallout would not be comparable - but it is a staggering sum, and it shows how much AI is distorting the market.
In early August, Microsoft, Amazon, and Google all took a beating from the market over their massive AI-related capital expenditures. If they cannot show meaningful revenue growth next quarter from the $150 billion (or more) they have sunk into new data centers and NVIDIA GPUs, they will face even more pressure.
Remember: outside of AI, big tech has no other growth story. When companies like Microsoft and Amazon start showing signs of slowing growth, they scramble to show the market they can still compete. Google - a monopoly under threat on multiple fronts, reliant almost entirely on search and advertising - likewise needs something new and shiny to hold investors' attention. Yet these products deliver too little utility, and much of the revenue appears to come from companies that "try" AI and then conclude it is not worth it.
As I see it, there are two possibilities:
1. Big tech realizes how deep in it is and, fearing Wall Street's displeasure, cuts its AI-related capital expenditures.
2. Big tech, desperate for a new source of growth, cuts costs elsewhere to sustain its disruptive posture - laying off employees and diverting money from other businesses to fund generative AI's death race.
It is unclear which will happen. If big tech accepts that generative AI is not the future, it genuinely has nothing else to show Wall Street; but it could adopt a Meta-style "year of efficiency," cutting capital expenditures (and jobs) while promising to "invest less" for a while. That is the likeliest path for Amazon and Google: eager as they are to please Wall Street, they still have their profitable monopolies to fall back on.
But real AI revenue growth has to show up in the next few quarters, and it has to be substantial - not vague talk of AI as a "maturing market" or of "annualized run rates." And if capital expenditures keep rising, that contribution will have to be bigger still.
I do not think that growth is coming. Whether in the third or fourth quarter of 2024 or the first quarter of 2025, Wall Street will start punishing big tech for its AI gluttony - and punishing it harder than it punished NVIDIA, which, for all of Jensen Huang's empty aphorisms and useless slogans, is the only company that can actually show AI increasing its revenue.
I worry the second scenario is the more likely one: these companies have convinced themselves that "AI is the future," their cultures are completely divorced from building software that solves real problems, and they may well burn the whole company down. I am deeply concerned that mass layoffs will be used to fund this movement, and nothing about the past few years gives me confidence that they will make the right choice and walk away from AI.
Big tech has been thoroughly poisoned by management consultants - Amazon, Microsoft, and Google are all run by MBAs - and it is surrounded by similar monsters, like Google's Prabhakar Raghavan, who drove out the people who actually built Google Search so that he could run it himself.
These people do not solve human problems; they have built cultures devoted to fictional problems that software can supposedly fix. For someone who spends their entire working life in meetings or buried in email, generative AI probably does look magical. Satya Nadella's (Microsoft's CEO) formula for success has mostly amounted to "let the engineers figure it out." Sundar Pichai could have ended the entire generative AI craze simply by mocking Microsoft's investment in OpenAI - but he didn't, because these people have no actual ideas, and these companies are not run by people who have lived the problems, let alone people who know how to solve them.
They are also desperate, and things have never been this serious for them - the nearest precedent is Meta burning billions on the metaverse. But this is bigger and uglier: they have invested far more, and they have bound AI so tightly to their companies that removing it would be both embarrassing and damaging to their stock prices - a tacit admission of everything I have described.
All of this could have been stopped earlier if the media actually held these companies accountable. The narrative is being sold with the same grifts as previous hype cycles, with the media assuming these companies will "figure it out," even though it is obvious they will not. Think I am being a pessimist? Then tell me: what is next for generative AI? What does it do next? If your answer is that they will "figure it out," or that there is "amazing stuff behind the scenes," you are an unwitting participant in a marketing operation (sit with that for a moment).
Author's aside: We really need to stop falling for this stuff. When Mark Zuckerberg claimed we were about to enter the metaverse, a raft of media outlets - The New York Times, The Verge, CBS News, CNN - helped promote a concept that was obviously flawed, looked terrible, and was premised on an outright lie about the future. It was plainly just a bad virtual-reality world, yet The Wall Street Journal was still calling it "a vision of the internet's future" six months after the hype cycle had visibly run its course. The same thing happened with cryptocurrency, Web3, and NFTs! The Verge, The New York Times, CNN, CBS News - these outlets promoted obviously useless technologies all over again. And I should single out The Verge - specifically Casey Newton who, despite his good reputation, claimed in July that "having the most powerful large language model could provide the foundation for all sorts of profitable products," when in reality this technology only loses money and has yet to produce anything genuinely useful and lasting.
I believe that Microsoft, at least, will start cutting costs in other parts of the business to keep the AI boom going. Earlier this year, a source shared an email with me in which Microsoft's senior leadership team asked (though the plan was ultimately shelved) to reduce power requirements in multiple areas of the company to free up power for GPUs - including moving other services' compute to other countries to free up capacity for AI.
On the Microsoft board of Blind - an anonymous social network that verifies users through their company email - one Microsoft employee complained in mid-December 2023 that "AI is eating their money," saying that "the cost of AI is so high it's swallowing raises, and it isn't going to get better." Another employee wrote in mid-July of the distinct feeling that Microsoft's cost-cutting to prop up the operating cash flow behind its NVIDIA spending was "deeply hurting Microsoft's culture."
Another added that they believe "Copilot will ruin Microsoft in fiscal year 2025" and that "the focus on Copilot will be significantly reduced in FY2025," revealing that the large Copilot deals they knew of in their country had "under 20% utilization after nearly a year of PoCs, layoffs, and adjustments," that "the company is taking on too much risk," and that Microsoft's "huge AI investment will not pay off."
Blind is anonymous, but it is hard to dismiss the sheer volume of posts describing cultural problems in Redmond (Microsoft's home in Washington state) - above all, a senior leadership team disconnected from the actual work, funding only projects with an AI label. Many posts express disappointment in CEO Satya Nadella's "incoherent rhetoric," and complain about the lack of bonuses and promotions in an organization fixated on chasing an AI boom that may not exist.
At minimum, there is a deep cultural malaise inside the company, with post after post saying some version of "I don't like working here" and expressing bewilderment at why the company is spending so much on AI - along with a sense that they simply have to accept it, because Satya Nadella does not appear to care.
The Information's article also raised a worrying question about the actual usage of Microsoft's Office Copilot: Microsoft has reserved enough data-center capacity for 365 Copilot to handle millions of daily users. How much of that capacity is actually being used is unclear.
Estimates put the current user base of Microsoft's Office Copilot somewhere between 400,000 and 4 million - meaning Microsoft may have built a mountain of idle, underused infrastructure.
Some will argue that Microsoft is positioning itself for future growth in this category, but consider another possibility: what if that growth never comes? What if - crazy as it sounds - Microsoft, Google, and Amazon have built these enormous data centers to capture demand that never materializes? Back in March, I made the point that I could not find a single company achieving meaningful revenue growth from generative AI. Nearly six months on, that is still true. The big companies' current playbook is to bolt AI features onto existing products and hope they lift sales, and there is no sign of it working anywhere. As with Microsoft, these "AI upgrades" do not appear to deliver any real business value to enterprises.
This raises a bigger question: are these AI investments sustainable? Have the tech giants overestimated the demand for AI tools?
While some companies' "AI integrations" may have driven a portion of Microsoft Azure, Amazon AWS, and Google Cloud spending, I suspect that demand is largely driven by investor sentiment: these companies are "investing in AI" to satisfy the market, not on the basis of any cost/benefit analysis or actual utility.
Yet having spent enormous amounts of time and money embedding generative AI features into their products, I believe these companies now face two scenarios:
1. They build and ship AI features, but customers will not pay for them - as with Microsoft's 365 Copilot. If they cannot get customers to pay for these add-ons now, at the height of the AI craze, it will only get harder once the hype fades and bosses stop demanding that employees "get on board with AI."
2. They build and ship AI features, but cannot find a way to charge extra for them - meaning the features get folded into existing products with no uplift in margins. AI then becomes a parasite, eating into the company's revenue.
Goldman Sachs' Jim Covello made a related point in a report on generative AI: if AI's benefit is merely efficiency (being able to analyze documents faster, say), competitors can buy the same efficiency. And nearly every generative AI integration looks alike: some form of copilot-style assistant for answering customer or internal questions (Salesforce, Microsoft, Box), content creation (Box, IBM), code generation (Cognizant, GitHub Copilot), and the coming wave of "agents," which are essentially customizable chatbots that can connect to other parts of a website.
This exposes one of generative AI's biggest challenges: however "powerful" it may be, that power lies in generating content from existing data, not in genuine intelligence. It is also why so many companies' websites are full of empty AI rhetoric - their real pitch amounts to "uh… figure it out yourself!"
I am worried about a chain reaction. Many companies are currently "trialing" AI, and once those trials end (Gartner predicts that by the end of 2025, 30% of generative AI projects will be abandoned after the proof-of-concept stage), they are likely to stop paying for the add-ons, or to stop building generative AI into their products.
If that happens, already-strained revenues will fall further for the hyperscalers selling the cloud compute behind generative AI, and for model vendors like OpenAI and Anthropic. That would put even more pressure on pricing, as their already loss-making margins deteriorate further - at which point OpenAI and Anthropic will almost certainly have to raise prices, if they have not already.
The tech giants can keep funding this boom - after all, they are largely the ones driving it - but that will not help the smaller startups that have grown used to discounted prices and will find themselves unable to keep operating. There are cheaper alternatives, such as independent providers running Meta's LLaMA models, but it is hard to believe they will not face the same profitability problems as the hyperscalers.
It is also worth noting that the hyperscalers are terrified of upsetting Wall Street. They could, in theory (as I fear), protect their margins with layoffs and other cost cuts, but those are short-term fixes that only work if there is still some money to shake out of this barren generative AI tree.
Either way, it is time to accept a fact: the money is not here. We need to stop and reckon with the tech industry's third era of delusion. Unlike cryptocurrency and the metaverse, however, this time everyone is in on the money bonfire, chasing a project that is unsustainable, unreliable, unprofitable, and environmentally harmful - packaged as "artificial intelligence," marketed as "automating everything," and without any real path to getting there.
Why does this keep happening? Why have we lurched from cryptocurrency to the metaverse to generative AI - technologies that never seem to be built for ordinary people?
This is the natural result of a tech industry now wholly focused on extracting more value from each customer rather than delivering more value to them. Or rather, an industry that no longer truly understands who its customers are or what they need.
Almost every product marketed to you today is trying to lock you into an ecosystem - as a consumer, one controlled by Microsoft, Apple, Amazon, or Google - and the cost of leaving that ecosystem keeps rising. Even cryptocurrency, ostensibly a "decentralized" technology, quickly abandoned its laissez-faire ideals and concentrated users on a handful of big platforms (such as Coinbase, OpenSea, Blur, or Uniswap), often backed by the same venture firms (such as Andreessen Horowitz). Crypto never became the standard-bearer of a new, fully independent online economy; it could only grow through the connections and money of the same people who funded the internet's previous waves.
As for the metaverse: it was a scam, but it was also Mark Zuckerberg's attempt to control the next generation of the internet, with "Horizon" as its main platform. We will get to generative AI in a moment.
All of this is about further monetization - raising the average value of each customer, whether by driving more platform use (and thus more ads), pushing "semi-useful" new features, or creating a new monopoly or oligopoly market that only capital-rich tech giants can enter - while delivering very little actual value or utility to the customer.
Generative AI is exciting (to some, at least) because the tech giants see it as the next great money-making tool - a revenue stream to bolt onto everything from consumer tech to enterprise services. Most generative compute flows through OpenAI or Anthropic and back to Microsoft, Amazon, or Google, generating the cloud revenue that sustains their growth. The biggest innovation here is not what generative AI can do; it is the creation of an ecosystem hopelessly dependent on a handful of hyperscalers.
Generative AI may not be very useful, but it is extremely easy to bolt onto almost any product, letting companies charge for the "new features." Whether in consumer apps or enterprise software, charging as many customers as possible for these add-ons can generate millions or even billions of dollars in revenue.
Sam Altman was clever enough to see that the tech industry needed a "new thing" - a technology everyone could take a piece of and sell. He may not fully understand the technology, but he understands the economy's hunger for growth, and he productized Transformer-based generative AI as a "magical tool" that can be dropped into most products to add some kind of distinctive feature.
But the frenzy to cram generative AI into everything exposes a massive disconnect between these companies and anything resembling actual consumer need or a well-run business. For the past twenty years, simply "doing something new" seemed to work: ship a new feature, make the sales team push it, and growth followed. That habit has led the tech industry's leaders into a harmful, unprofitable business model.
The executives atop these companies - mostly MBAs and management consultants who have never built a product or a tech company from scratch - either do not understand or do not care that generative AI has no path to profitability, perhaps assuming it will "naturally" become profitable the way Amazon Web Services did (AWS took nine years), even though the two are nothing alike. Things always "worked out" before - why wouldn't they now?
And of course, beyond sharply higher interest rates reshaping the venture capital market - shrinking VC reserves and fund sizes - public sentiment toward tech has never been more negative. Combine that with the many other reasons 2024 is nothing like 2014, and there are too many to cover even in this 8,000-word piece.
What is truly worrying is that, beyond AI, many of these companies seem to have nothing new. What else do they have? What else can keep them growing? What other options are there?
None. They have nothing else. And that is the problem, because when AI fails, the damage will inevitably spread to the rest of the tech industry.
Every major player in tech - consumer or enterprise - is selling some kind of AI product, built on large language models or models of their own, and usually running on big tech's cloud systems. To some degree, they all depend on big tech's willingness to subsidize the entire industry.
I suspect a subprime-style AI crisis is brewing, in which nearly the entire tech industry has bought into a technology sold at artificially low prices, highly concentrated and subsidized by big tech. At some point, generative AI's staggering, toxic burn rate will catch up with them, leading to price increases, or to new products and features with fees so aggressive - like Salesforce's "Agentforce" at $2 per conversation - that even well-funded enterprise customers cannot justify the expense.
What happens when the entire tech industry depends on software that only loses money and delivers little real value? What happens when the pressure becomes too great, these AI products prove irreconcilable with reality, and these companies have nothing else to sell?
I honestly do not know - but the tech industry is heading toward a terrifying reckoning, a creative bankruptcy produced by an economic environment that rewards growth over innovation, monopoly over loyalty, and management over actual creation.