Former Google CEO Schmidt: AI is like electricity and fire, these 10 years will determine the next 100 years.

Whoever forms a closed loop first will win the future.

Source: AI Deep Research Institute

In 2025, the AI world is being torn apart by invisible tensions:

On one side, there is the explosive growth of model parameters, and on the other, the limits of system resources.

Everyone is asking: Which is stronger, GPT-5, Claude 4, or Gemini 2.5? But former Google CEO Eric Schmidt offered a deeper insight in a public speech on September 20, 2025:

"The arrival of AI in human history is equivalent to the invention of fire and electricity. The next ten years will determine the landscape for the next hundred years."

He is not talking about model performance or the proximity of AGI; he is saying:

AI is no longer a tool for enhancing efficiency but is redefining the way business operates.

Meanwhile, in a conversation at a16z, the well-known Silicon Valley venture capital firm, chip analyst Dylan Patel pointed out:

"Exaggeratedly speaking, now grabbing GPUs is like grabbing 'drugs'; you need connections, channels, and quotas. But that's not the point; the real competition is who can build a complete system to support AI."

Both of their viewpoints point to the same trend:

  • Parameters are not the boundary; electricity is the boundary;

  • Models are not the moat; platforms are the moat;

  • AGI is not the goal; implementation is key.

If the main theme of the past three years was the explosion of capabilities,

then the main theme of the next decade will be infrastructure.

Section 1 | AI is no longer a tool upgrade but a system reconstruction

In this conversation, Eric Schmidt bluntly stated:

"The arrival of AI in human history is on the same level as the invention of electricity and fire."

He is not emphasizing how smart AI is but reminding everyone that the ways we are familiar with working, managing, and making money may need to change fundamentally.

It's not about having AI help you write faster, but about letting AI decide how to write.

Schmidt said that the strongest AI tools today are no longer assistants but are becoming:

A brand new kind of infrastructure that, like the power grid, becomes standard equipment for every organization.

This statement directly overturns the perception of AI that people have held for the past few years.

In other words, this is not about personal skill enhancement or team efficiency optimization, but a fundamental change in the way the entire organization operates:

  • Decision-making methods have changed; AI participates in thinking;

  • Writing, programming, customer service, and finance all have AI partners;

  • Data input, result evaluation, and feedback mechanisms have all been redesigned by AI.

This organization-wide transformation led Schmidt to a realization: the point is not to design detailed processes in advance, but to let AI gradually adapt and optimize in real-world use.

According to him, several startups he is currently involved with have adopted this method, not by making complete plans first but by letting AI participate directly in work, continuously adjusting and improving in practice.

So what he is talking about is not stronger models but whether organizations should shift to a new AI-native form.

AI is transitioning from a tool to the infrastructure of enterprise operations.

Section 2 | The limit on AI development is electricity

In the past, we always thought that the development of AI capabilities would be hindered by technology:

  • Chips are not powerful enough, so models cannot be computed;

  • Algorithms are too complex, so inference is too slow.

But Eric Schmidt said that the real limitation to AI development is not technical parameters but electricity supply.

He mentioned a specific data point:

"By 2030, the U.S. will need an additional 92GW of electricity to support the demand of data centers."

What does this mean?

A large nuclear power plant has a capacity of only 1 to 1.5GW.

92GW is equivalent to the output of dozens of nuclear power plants. The reality, however, is that the current number of nuclear power plants under construction in the U.S. is basically zero.
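
As a rough sanity check on that comparison, using only the figures quoted above (92GW of additional demand, 1 to 1.5GW per large plant), the arithmetic works out like this:

```python
# Back-of-the-envelope check on the "dozens of nuclear power plants" comparison,
# using only the figures quoted above (no other data assumed).
extra_demand_gw = 92          # additional U.S. data-center demand by 2030
plant_output_low_gw = 1.0     # low end of a large nuclear plant's capacity
plant_output_high_gw = 1.5    # high end

plants_if_large = extra_demand_gw / plant_output_high_gw   # ~61 plants
plants_if_small = extra_demand_gw / plant_output_low_gw    # 92 plants
print(f"Roughly {plants_if_large:.0f} to {plants_if_small:.0f} large nuclear plants")
```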

This means that the future problem is not that model technology is not advanced enough, but that electricity supply cannot keep up with training demands.

Schmidt even raised a surprising example before Congress: the U.S. might need to train American models overseas, for instance at power-generation hubs in Middle Eastern countries.

(Sam Altman had just published a blog post: "Abundant Intelligence")

This thirst for electricity is not alarmist. Just on September 23, OpenAI CEO Sam Altman published a blog post proposing an even more radical direction: building a factory that can produce 1GW of new AI computing infrastructure every week, with each gigawatt consuming roughly as much electricity as a city.

He clearly pointed out that this will require collaborative breakthroughs across multiple systems, including chips, electricity, robotics, and construction.

In his words: "Everything starts with computation."

Altman's goal is not a distant vision but infrastructure that is already being laid out. It is the concrete realization of Schmidt's point that "AI will become the new power grid."

In fact:

Model training itself is not expensive; the real costs are electricity consumption, operating time, and equipment maintenance.

As inference tasks become more numerous and generated content becomes more complex (images, videos, long texts), the electricity demand of AI factories is becoming a new bottleneck for computing power.

Dylan Patel also mentioned in another conversation that when building AI systems, one must consider not only how fast the chips are but also cooling, electricity costs, and stability. He put it more vividly:

"An AI factory is not just about buying a bunch of GPUs; you also need to consider energy scheduling and continuous operating capability."

So this is not a chip problem but a question of whether electricity can keep up.

And when electricity cannot meet the demand, a chain reaction occurs:

  • Models cannot be trained;

  • Inference costs rise;

  • AI tools cannot be deployed on a large scale;

  • Ultimately losing the possibility of implementation.

Schmidt believes that the biggest real challenge facing AI implementation today is that infrastructure cannot keep up. Without sufficient energy support, even the most advanced model capabilities cannot be utilized.

Therefore, the next battlefield for AI is not in laboratories but in power plants.

Section 3 | It's not about who has the chips, but who can use them

Even if the electricity issue is resolved, the problem is not over: can you actually get all these chips, models, and tasks running?

Many people think that as long as they get the most advanced chips, such as the H100 or B200, the AI factory is as good as built.

But Dylan Patel immediately poured cold water on that idea:

"GPUs are currently very scarce; you have to text around asking, 'How much stock do you have? What’s the price?'"

He continued:

"But just having chips is not enough. The core is to make them work effectively together."

In other words, the chips themselves are just components; what truly determines whether an AI factory can operate continuously is whether you have the ability to integrate these chips to work together.

He divides this integration capability into four levels:

  1. Computing foundation: hardware basics like GPUs and TPUs;

  2. Software stack: training frameworks, scheduling systems, task allocators;

  3. Cooling and power management: not just having electricity but also controlling temperature, load, and electricity costs;

  4. Engineering capability: who optimizes models, tunes computing power, and controls costs.

This is the core of what Dylan refers to as the "AI factory": an AI factory is not a model or a card but a complete set of continuous engineering scheduling capabilities.
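
To make those four levels concrete, here is a minimal, hypothetical sketch of an AI factory as a readiness checklist; the structure and thresholds are purely illustrative, not anyone's actual tooling:

```python
# Hypothetical sketch: Dylan Patel's four layers as a readiness checklist.
from dataclasses import dataclass

@dataclass
class AIFactory:
    gpu_count: int               # 1. computing foundation (GPUs, TPUs)
    scheduler_in_place: bool     # 2. software stack (frameworks, schedulers)
    cooling_capacity_mw: float   # 3. cooling and power management
    power_draw_mw: float
    ml_engineers: int            # 4. engineering capability

    def bottleneck(self) -> str:
        """Return the first layer that blocks continuous operation."""
        if self.gpu_count == 0:
            return "compute: no accelerators"
        if not self.scheduler_in_place:
            return "software: no scheduling/training stack"
        if self.cooling_capacity_mw < self.power_draw_mw:
            return "cooling/power: cannot sustain the load"
        if self.ml_engineers == 0:
            return "engineering: no one to tune models and control costs"
        return "ready for continuous operation"

print(AIFactory(gpu_count=10_000, scheduler_in_place=True,
                cooling_capacity_mw=8.0, power_draw_mw=12.0,
                ml_engineers=20).bottleneck())
# -> "cooling/power: cannot sustain the load"
```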

You will find that an AI factory not only requires a lot of computing power but also needs complex engineering coordination:

  • A bunch of GPUs are "raw materials";

  • Software scheduling is the "control room";

  • Cooling and power are the "electricians";

  • The engineering team is the "maintenance crew."

In simple terms, the focus has shifted from "building models" to "building infrastructure."

Dylan observed an interesting phenomenon: today's chip companies are no longer just selling cards; they are starting to sell the whole build-out. Nvidia is beginning to help customers integrate servers, configure cooling, and build platforms, effectively becoming a platform itself.

(Source: Reuters report)

On the same day this interview was published, Nvidia and OpenAI announced a letter of intent for future cooperation: Nvidia will provide OpenAI with up to 10GW of data center capacity, with the investment potentially reaching hundreds of billions of dollars.

Sam Altman stated something in the announcement that perfectly corroborates the above logic:

Computing infrastructure will be the foundation of the future economy. Nvidia is not just selling cards and supplying chips; it is deploying, building, and operating the entire AI factory together with OpenAI.

This indicates a trend: the ones truly capable of forming a closed loop are not the smartest people but the ones who know how to put AI into operation.

That is:

  • Being able to create a model is one thing;

  • Being able to make the model run stably every day is another.

AI is no longer a product that can be used right after purchase; it is a complex engineering system that requires continuous operation. The key is whether you have the capability to operate this system in the long term.

Section 4 | The diffusion of AI capabilities has become a trend; where is the focus of competition?

While everyone is still competing for operational capabilities, new changes have already emerged.

AI models are getting better and smarter, but Eric Schmidt issued a warning:

"We cannot stop model distillation. Almost anyone who can access the API can replicate its capabilities."

What is distillation? Simply put:

  • Large models are powerful, but deployment costs are too high;

  • Researchers use the large model's outputs to train a smaller model, teaching it to mimic the larger model's way of thinking;

  • Low cost, fast speed, high accuracy, and difficult to trace.

It is like a top chef: you cannot clone the chef, but by studying the dishes they produce, you can train someone else to get about 80% of the way there.
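
In code, the classic form of this idea (soft-label or "logit" distillation, following Hinton et al.) looks roughly like the sketch below. It assumes PyTorch and two generic models; API-only distillation of a closed model typically trains the student on the teacher's generated text rather than its logits, but the principle is the same:

```python
# Minimal sketch of knowledge distillation (assumes PyTorch and two generic
# models; not any specific lab's pipeline).
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Make the student match the teacher's softened output distribution."""
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    # KL divergence between teacher and student, scaled by T^2 (Hinton et al., 2015).
    return F.kl_div(student_log_probs, soft_targets,
                    reduction="batchmean") * temperature ** 2

def distill_step(student, teacher, inputs, optimizer):
    teacher.eval()
    with torch.no_grad():
        teacher_logits = teacher(inputs)      # large, frozen, expensive model
    student_logits = student(inputs)          # small, trainable, cheap model
    loss = distillation_loss(student_logits, teacher_logits)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```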

The problem arises: the easier it is to transfer capabilities, the harder it is to restrict the model itself.

(Dylan Patel, a well-known chip industry analyst focused on AI infrastructure research)

Dylan Patel also mentioned an industry trend:

Currently, the cost of distillation is only about 1% of the original training cost, yet it can reproduce 80-90% of the original model's capabilities.

Even if OpenAI, Google, and Anthropic protect their models tightly, they cannot stop others from obtaining similar capabilities through distillation.

In the past, everyone competed on who was stronger; now the question is: who can still maintain control?

Schmidt said in an interview: the largest models will never be open. The diffusion of smaller models is inevitable.

He is not advocating for closure but reminding us of a reality: the speed of technological diffusion may far exceed the pace of governance.

For example, many teams are already using the GPT-4 API to distill a GPT-4-lite:

  • Low cost, easy to deploy;

  • No clear external labeling of where it came from;

  • To users, the experience feels almost identical.

This brings a dilemma: the capabilities of models may diffuse like "air"; however, the source of the models, accountability, and usage boundaries are difficult to define clearly.

What Schmidt is truly worried about is not that the models are too powerful, but rather:

"When more and more models possess strong capabilities but are unregulated, difficult to trace, and lack clear accountability, how can we ensure the trustworthiness of AI?"

This phenomenon is no longer hypothetical but a current reality.

As the diffusion of AI capabilities has become an irreversible trend, simply possessing advanced models is no longer a moat. The focus of competition has shifted to how to better utilize and serve these capabilities.

Section 5 | The key to a platform is becoming more accurate with use

So ultimately, what matters more than whether you can create something is: can you build a platform that gets better the more it is used?

Eric Schmidt provided his answer:

"Successful AI companies in the future will not only compete on model performance but also on their ability to learn continuously."

In simpler terms: it’s not about creating a product once and being done; it’s about building a platform that becomes smarter, more user-friendly, and more stable the more it is used.

He further explained:

The core of a platform is not its features but making others unable to do without it.

For example:

  • The power grid is not about one bulb shining but about letting every light shine;

  • An operating system is not about having many functions but about allowing a whole set of applications to run stably;

  • The same goes for an AI platform: it is not about building one specific intelligent assistant but about enabling other teams, users, and models to connect to it, call it, and build on it.

An AI platform is not a specific function but a continuously operating service network.

He also advised young founders: don’t just ask whether the product is perfect. Look at whether it has formed a path of "use → learn → optimize → reuse."

Because: a platform that can learn continuously has the potential for long-term survival.

Dylan Patel added that this is actually the path to Nvidia's success. Jensen Huang has been CEO for thirty years, relying not on luck but on continuously binding chips and software into a closed loop: the more customers use it, the better he understands what they want; the better he understands their needs, the better the product becomes; the better the product, the harder it is for customers to give up.

This creates a virtuous cycle, becoming more valuable the more it is used.

It’s not about "launching at the peak," but about a platform that can grow continuously.

Schmidt summarized it clearly: can you build such a growth mechanism? It may start small, but can it continuously adapt, expand, and update?

His judgment on the future successful AI platforms is:

It’s not about what code you wrote, but whether you can keep a platform alive and make it stronger over time.

Conclusion | Whoever forms a closed loop first will win the future

Eric Schmidt said in the interview:

"AI is like electricity and fire; these ten years will determine the next hundred years."

The capabilities of AI are ready, but where to go, how to build, and how to use them are still unclear.

The current focus is not on waiting for the next generation of models but on making good use of the AI that already exists. Stop obsessing over when GPT-6 or DeepSeek R2 will be released; first get the tools you already have running in customer service, writing, data analysis, and other scenarios. Make sure AI can work stably 24 hours a day, rather than just dazzling at launch events.

This is not a competition of intelligence but a contest of execution.

Whoever can first bring AI from the laboratory to reality will seize the initiative for the next decade.

And this "closed loop competition" has already begun.

