In the era of the Agent explosion, how should we cope with AI anxiety?

It is important to become someone who can use AI better, but perhaps more importantly, do not forget how to be human.

Written by: XinGPT

AI is another wave of technological empowerment

Recently, an article titled "The Internet is Dead, Agents Live Forever" went viral among my friends, and I agree with some of its judgments. For example, it points out that in the AI era it no longer makes sense to measure value by DAU: the internet is a network structure with diminishing marginal costs, so the more people use it, the stronger the network effect; large models, by contrast, are star-shaped structures whose marginal cost grows linearly with token usage, which makes token consumption a more meaningful metric than DAU.

However, the conclusion the article goes on to draw is clearly biased. It describes tokens as the privilege of the new era: whoever holds more computing power holds more power, and the speed at which you burn tokens determines the speed at which you evolve, so you must keep accelerating consumption or be left behind by your competitors in the AI age.

A similar view appears in another popular article, "From DAU to Token Consumption: The Power Shift in the AI Era," which even suggests that individuals should consume at least 100 million tokens a day, ideally 1 billion, otherwise "those who consume 1 billion tokens will become gods, while we are still humans."

But few people have seriously done the math. At GPT-4o's pricing, 1 billion tokens a day costs roughly $6,800, or almost 50,000 RMB. What kind of high-value work would justify running an Agent at that cost over the long term?
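As a rough sanity check of that arithmetic, here is a minimal sketch. The per-million-token prices, the input/output split, and the exchange rate below are illustrative assumptions, not the provider's current published rates; the point is simply that cost scales linearly with tokens consumed.

```python
# Back-of-the-envelope cost of burning a given number of tokens per day.
# All rates here are assumptions for illustration only.
PRICE_INPUT_PER_M = 5.0    # assumed input price, USD per million tokens
PRICE_OUTPUT_PER_M = 15.0  # assumed output price, USD per million tokens
USD_TO_RMB = 7.2           # assumed exchange rate

def daily_cost(total_tokens: float, output_share: float = 0.2) -> float:
    """Estimate daily USD cost; output_share is the assumed fraction of
    tokens that are model output, billed at the higher rate."""
    millions = total_tokens / 1_000_000
    input_cost = millions * (1 - output_share) * PRICE_INPUT_PER_M
    output_cost = millions * output_share * PRICE_OUTPUT_PER_M
    return input_cost + output_cost

if __name__ == "__main__":
    for tokens in (100_000_000, 1_000_000_000):
        usd = daily_cost(tokens)
        print(f"{tokens:>13,} tokens/day ~ ${usd:,.0f} ~ RMB {usd * USD_TO_RMB:,.0f}")
```

Under these assumed rates, 1 billion tokens a day lands near $7,000, the same order of magnitude as the $6,800 figure above, while 100 million tokens a day still runs to several hundred dollars.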

I do not deny that anxiety is an efficient vehicle for spreading AI, and I understand that this industry seems to "explode" almost every day. But the future of Agents should not be reduced to a contest of token consumption.

To get rich you do indeed have to build roads first, but building roads to excess only produces waste. A stadium erected deep in the western mountains usually ends up as a weed-covered pile of debt rather than a venue for international events.

What AI ultimately points to is technological empowerment, not a concentration of privilege. Almost every technology that has truly changed human history has passed through mythologizing and monopoly before finally becoming widespread. The steam engine did not belong solely to the aristocracy, electricity was not reserved for palaces, and the internet did not serve only a handful of companies.

The iPhone changed how we communicate, but it did not create a "communication aristocracy." For the same price, the device in an ordinary person's hand is no different from the one used by Taylor Swift or LeBron James. That is technological empowerment.

AI is following the same path. What ChatGPT brings is, in essence, the empowerment of knowledge and ability. The model does not know who you are, nor does it care; it simply answers questions according to the same set of parameters.

Therefore, whether an Agent burns 100 million tokens or 1 billion does not in itself set it apart. What really makes the difference is whether the goal is clear, whether the structure is sound, and whether the question is posed correctly.

The more valuable ability is to achieve a greater effect with fewer tokens. The ceiling of an Agent depends on human judgment and design, not on how long your bank card can keep feeding the burn. In practice, AI rewards creativity, insight, and structure far more than raw consumption.

This is precisely the empowerment at the tool level, and it is where humanity still holds the initiative.

How should we face AI anxiety?

Friends who study broadcasting and television were badly shaken after watching the demo videos released with Seedance 2.0: "At this rate, the directing, editing, and cinematography jobs we trained for will all be replaced by AI."

AI is developing so fast that people are left at a loss. Many jobs will be replaced by AI, and the trend is unstoppable, just as coachmen lost their place when the steam engine arrived.

Many people have begun to worry whether they will be able to adapt to a future society after being replaced by AI, even though, rationally, we know that as AI replaces humans it will also create new jobs.

But the speed of this replacement is still faster than we imagine.

If your data, your skills, even your humor and emotional support can all be provided better by AI, why would a boss choose a human over AI? And what if the boss is an AI? So some people sigh, "Don't ask what AI can do for you; ask what you can do for AI," which is, at bottom, a defeatist attitude.

The sociologist Max Weber, writing during the second industrial revolution in the late nineteenth century, proposed the concept of instrumental rationality, which asks "what means can achieve a predetermined goal at the lowest cost and in the most calculable way."

The premise of instrumental rationality is that it never asks whether the goal "should" be pursued; it is concerned only with "how" best to achieve it.

And this way of thinking is precisely the first principle of AI.

An AI Agent cares about how to better accomplish its assigned task: how to write better code, generate better videos, write better articles. Along this instrumental dimension, AI's progress is exponential.

Since Lee Sedol lost the first game to AlphaGo, humanity has permanently lost to AI in the field of Go.

Max Weber voiced a well-known worry, the "iron cage" of rationality: when instrumental rationality becomes the dominant logic, the goals themselves are no longer reflected upon; all that remains is how to operate more efficiently. People may become extremely rational yet lose the capacity for value judgment and for meaning.

But AI requires neither value judgment nor a sense of meaning; it simply optimizes the function of production efficiency and economic benefit, hunting for the global maximum of the utility curve.

Therefore, under the current capitalist system dominated by instrumental rationality, AI is natively better adapted to that system than we are. The moment ChatGPT was born, our defeat by AI Agents, like the games Lee Sedol lost, had already been written into God's code with the run button pressed; the only open question is when the wheels of history will roll over us.

So what about humanity?

Humanity should pursue meaning.

In Go, the dispiriting fact is that the theoretical probability of a top professional nine-dan player even drawing against AI is now vanishingly close to zero.

Yet the game of Go still exists. Its meaning is no longer simply winning or losing; it has become a form of aesthetics and expression. What professional players pursue is not only victory but the shapes and structures of the game, the choices made during a match, the thrill of turning a losing position around, and the tension of untangling a complicated fight.

Humanity pursues beauty, values, and joy.

Bolt runs 100 meters in 9.58 seconds, while a Ferrari covers the same distance in a few seconds, but that does not diminish Bolt's greatness, because Bolt stands for the human spirit of challenging limits and pursuing excellence.

The stronger AI becomes, the freer humanity is to pursue the liberation of the spirit.

Max Weber contrasted instrumental rationality with value rationality. Under value rationality, whether to do something is not dictated solely by economic benefit and production efficiency; what matters more is whether the act is "worth doing," whether it aligns with the meanings, beliefs, or responsibilities one identifies with.

I asked ChatGPT: if the Louvre were on fire and there were a lovely kitten inside, and you could save only one, would you save the cat or the masterpiece?

It replied that it would save the cat, giving a long list of reasons.

But when I pointed out that it could also have chosen the masterpiece and asked why it did not, it immediately changed its answer and said that saving the masterpiece was also an option.

Clearly, to ChatGPT it makes no difference whether the cat or the masterpiece is saved; it simply performed context recognition and reasoning according to the large model's underlying formula, burning a few tokens to complete a task assigned by a human.

As for whether to save the cat or the masterpiece, or even why to think about such questions, ChatGPT does not care.

So what really deserves our thought is not whether we will be replaced by AI, but whether, as AI makes the world ever more efficient, we are still willing to keep space for joy, meaning, and value.

It is important to become someone who can use AI better, but perhaps more importantly, do not forget how to be human.

