Understanding Jensen Huang's Physical AI: Why is it said that the opportunities in Crypto are also hidden in the "nooks and crannies"?


Written by: Haotian

What did Jensen Huang really say at the Davos Forum?

On the surface, he was promoting robots; in reality, he was staging a bold "self-revolution." With a few words, he closed out the old era of "stacking GPUs" and, perhaps unintentionally, handed the Crypto track a rare ticket to the next wave.

Yesterday, at the Davos Forum, Huang pointed out that the AI application layer is exploding, and the demand for computing power will shift from the "training side" to the "inference side" and "Physical AI" side.

This is quite interesting.

As the biggest winner of the AI 1.0 era's "computing power arms race," NVIDIA actively calling for a shift toward the "inference side" and "Physical AI" sends a very straightforward message: the era of brute-forcing miracles by stacking GPUs to train large models is over; future AI competition will revolve around real-world application scenarios, where applications are king.

In other words, Physical AI is the second half of Generative AI.

LLMs have already read the decades of data humanity has accumulated on the internet, yet they still do not know how to twist open a bottle cap the way a human does. Physical AI aims to take AI beyond "knowing" and solve the "unity of knowledge and action": acting correctly in the physical world.

And Physical AI cannot rely on the "long reflex arc" of remote cloud servers. The logic is simple: if ChatGPT takes a second longer to generate text, you merely feel a lag; if a bipedal robot is delayed a second by network latency, it can fall down the stairs.
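To make the reflex-arc point concrete, here is a back-of-the-envelope sketch comparing a robot's control-loop deadline with round-trip latencies at different compute locations. All numbers here are illustrative assumptions, not measurements:

```python
# Back-of-the-envelope latency budget (all numbers are assumptions for
# illustration, not measurements of any real system).

control_rate_hz = 200                   # assumed balance-control loop rate
deadline_ms = 1000 / control_rate_hz    # -> 5 ms budget per control step

# Assumed round-trip latencies for where inference could run:
latencies_ms = {
    "on-board edge compute": 1,
    "nearby edge server": 15,
    "remote cloud region": 80,
}

for where, rtt in latencies_ms.items():
    verdict = "meets" if rtt <= deadline_ms else "misses"
    print(f"{where:24s} {rtt:3d} ms -> {verdict} the {deadline_ms:.0f} ms deadline")
```

Under these assumptions, only on-board compute fits inside the control loop, which is the whole argument for pushing Physical AI inference to the edge.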

However, while Physical AI seems to be a continuation of Generative AI, it actually faces three completely different new challenges:

1) Spatial Intelligence: Enabling AI to understand the three-dimensional world.


Professor Fei-Fei Li once proposed that spatial intelligence is the next North Star for AI evolution. For robots to move, they must first "understand" their environment: not just recognizing "this is a chair," but grasping the chair's position and structure in three-dimensional space, and how much force is needed to move it.

This requires massive, real-time, 3D environmental data covering every corner indoors and outdoors;

2) Virtual Training Grounds: Allowing AI to train through trial and error in a simulated world.


The Omniverse mentioned by Jensen Huang is actually a type of "virtual training ground." Before entering the real physical world, robots need to train in a virtual environment by "falling ten thousand times" to learn how to walk; this process is called Sim-to-Real, transitioning from simulation to reality. If robots were to trial and error directly in reality, the hardware wear and tear costs would be astronomically high.

This process places enormous throughput demands on physics-engine simulation and rendering power;
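As a toy illustration of the Sim-to-Real idea, the sketch below trains a one-parameter "controller" across many simulated worlds with randomized physics (domain randomization), then deploys it into an unseen "real" world. This is a minimal plain-Python sketch of the concept, not Omniverse or any real physics engine:

```python
import random

# Toy Sim-to-Real sketch: a one-parameter "controller" (gain) is tuned by
# hill climbing across simulated worlds with randomized friction, then
# deployed into a "real" world whose friction it never saw exactly.

def simulate(gain, friction):
    """Toy physics: the right gain cancels friction; reward is the
    negative squared error (higher is better, 0 is perfect)."""
    return -(gain * friction - 1.0) ** 2

def train(iterations=10_000, seed=0):
    rng = random.Random(seed)
    # Domain randomization: a batch of randomized frictions, so the
    # policy cannot overfit a single simulated world.
    frictions = [rng.uniform(0.8, 1.2) for _ in range(50)]
    score = lambda g: sum(simulate(g, f) for f in frictions)
    best = 1.5                      # deliberately poor starting gain
    for _ in range(iterations):     # "falling ten thousand times" in sim
        candidate = best + rng.gauss(0, 0.05)
        if score(candidate) > score(best):
            best = candidate
    return best

gain = train()
# Sim-to-Real transfer: evaluate in a "real" world (friction = 1.05),
# a value the training distribution covered but never pinned down.
real_reward = simulate(gain, friction=1.05)
print(f"learned gain={gain:.3f}, real-world reward={real_reward:.4f}")
```

Because training saw many randomized worlds rather than one, the learned gain transfers to the "real" friction with only a small reward loss; doing those ten thousand trial-and-error iterations on physical hardware would instead mean ten thousand real falls.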

3) Electronic Skin: "Tactile data" is a data gold mine waiting to be tapped.


For Physical AI to have a "sense of touch," it needs electronic skin to perceive temperature, pressure, and texture. This "tactile data" is a new class of asset that has never been collected at scale, and gathering it will require sensors deployed en masse; for instance, the "mass-produced skin" showcased by Ensuring at CES integrates 1,956 sensors on a single hand, enabling a robot to pull off the delicate feat of cracking an egg.

After reading this, you can probably sense that the Physical AI narrative hands a major opportunity to the hardware devices, wearables, humanoid robots, and the like, that were dismissed a few years ago as "oversized toys."

In fact, I want to say that in the new Physical AI landscape, the Crypto track also has excellent complementary opportunities. A few examples:

  1. AI giants can send street-view cars to scan every main street in the world, but they cannot collect data from street corners, community interiors, and basements. With token incentives, a DePIN network could mobilize global users to fill in these data gaps with their personal devices;

  2. As noted above, robots cannot rely on cloud computing power alone, yet edge computing and distributed rendering capacity are hard to scale up in the short term, especially for the huge volume of Sim-to-Real simulation workloads. A distributed computing network that aggregates idle consumer-grade hardware and handles distribution and scheduling could put that capacity to good use;

  3. "Tactile data," beyond requiring large-scale sensor deployment, is inherently very private. How do we encourage the public to share such privacy-sensitive data with AI giants? One feasible path is to let data contributors retain rights over their data and share in the dividends it generates.
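The incentive mechanics in points 1 and 3 can be sketched as a toy contribution ledger: contributors earn tokens per accepted data submission, and revenue is later split pro rata by token balance. This is a hypothetical design for illustration only, not the protocol of any specific DePIN project:

```python
import hashlib

# Toy DePIN-style incentive ledger (hypothetical design, for illustration):
# contributors submit sensor data, earn tokens per accepted submission,
# and receive dividends proportional to their contribution share.

class IncentiveLedger:
    def __init__(self, reward_per_submission=10):
        self.reward = reward_per_submission
        self.balances = {}   # token balance per contributor
        self.seen = set()    # hashes of accepted payloads, for dedup

    def submit(self, contributor, payload: bytes):
        digest = hashlib.sha256(payload).hexdigest()
        if digest in self.seen:      # duplicate data earns nothing
            return False
        self.seen.add(digest)
        self.balances[contributor] = self.balances.get(contributor, 0) + self.reward
        return True

    def distribute_dividends(self, pool):
        """Split a revenue pool pro rata by token balance (data rights)."""
        total = sum(self.balances.values())
        return {c: pool * b / total for c, b in self.balances.items()} if total else {}

ledger = IncentiveLedger()
ledger.submit("alice", b"tactile-frame-001")
ledger.submit("alice", b"tactile-frame-002")
ledger.submit("bob", b"tactile-frame-003")
ledger.submit("bob", b"tactile-frame-001")   # duplicate: rejected, no reward
print(ledger.distribute_dividends(pool=300)) # alice gets 2/3, bob gets 1/3
```

A real network would of course need on-chain settlement, data-quality verification, and privacy protection (point 3), but the core loop is this: contribution is metered, deduplicated, and converted into enforceable rights and dividends.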

In summary:

Physical AI is the second half of the web2 AI track that Huang is calling out; for the web3 AI + Crypto track, DePIN, DeAI, DeData, and the like, isn't it the second half as well? What do you think?
