AI is sending red envelopes, but this is not what I wanted. AgentFi, from red envelopes and collaboration to settlement: why does this AI red envelope war leave me a bit sad?

Written by: Yokiiiya

I previously posted a video titled "If Yuanbao Could Send Red Envelopes, That Would Be the Real AgentFi." Recently, the AI red envelope war has truly arrived. But to be honest, I am not as excited as I imagined; instead, I feel a bit sad. This has almost nothing to do with what I envisioned.

The scenario I imagined was actually very specific: it wasn't the platform sending red envelopes to attract new users, but rather a "Yuanbao" acting as an Agent, capable of understanding my goals, my constraints, and, with my authorization, sending specified amounts of red envelopes to designated people at the right time.

That kind of "sending money" is not a marketing action but rather a systematic behavior that is understood, authorized, and executed precisely.
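
To make that distinction concrete, here is a minimal sketch of what "understood, authorized, and executed precisely" could mean in code. Everything in it is hypothetical: the policy fields, the limits, and the function names are my own illustrative assumptions, not any real product's API. The point is only that the agent moves money because a request fits a goal and constraints the user granted in advance.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class SpendingPolicy:
    """Authorization the user grants to the agent ahead of time (hypothetical fields)."""
    allowed_recipients: set[str]   # who the agent may pay
    per_transfer_limit: float      # maximum amount per red envelope
    daily_limit: float             # maximum total per day
    purpose: str                   # the goal this budget serves

@dataclass
class TransferRequest:
    recipient: str
    amount: float
    reason: str
    requested_at: datetime

def agent_dispatch(request: TransferRequest, policy: SpendingPolicy,
                   spent_today: float) -> bool:
    """Execute a transfer only if it fits the user's pre-authorized policy.

    This is the difference from a platform subsidy: the money moves because
    the request satisfies a goal and constraints the user defined, not
    because a growth campaign decided to hand it out.
    """
    if request.recipient not in policy.allowed_recipients:
        return False                                   # unknown recipient: refuse
    if request.amount > policy.per_transfer_limit:
        return False                                   # single transfer too large
    if spent_today + request.amount > policy.daily_limit:
        return False                                   # daily budget exhausted
    # ... the actual payment rail would be called here, outside this sketch
    return True
```

In other words, the authorization lives in the policy, and the agent's job is to satisfy it, not to maximize reach.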

So, when the AI red envelope war truly arrived recently, my first reaction was not excitement but a bit of sadness. It wasn't because there weren't enough red envelopes or because they weren't sent quickly enough, but because this matter has almost nothing to do with the kind of "Agent sending money" I initially envisioned. The AI red envelopes in reality resemble a familiar technical shell game: the model has changed, the entry point has changed, but the problem framework remains the same. The money is still being sent by the platform, and the logic remains subsidies, attracting new users, viral growth, and positioning.

What I originally anticipated was something entirely different: money being dispatched by an Agent, actions serving a specific goal, and sending money itself being the result of system understanding, not a means of growth.

It is this gap that made me realize that what we are discussing may not be "Will AI send red envelopes?" but rather what exactly we are using to understand AI.

1. The "Familiarity" of the AI Red Envelope War

This familiarity comes not just from the red envelopes themselves but from the narrative framework behind them that we are already very familiar with. Red envelopes, subsidies, free credits, invitation cashback, viral growth. Different companies' AI products compete for user attention in almost the same way. Various WeChat groups quickly filled up with links to "Yuanbao Sending Red Envelopes." This scene is not unfamiliar.

Ten years ago, it was ride-hailing apps, then food delivery, e-commerce, community group buying; later, it was content platforms, education, SaaS. Almost every time a new technology truly scales, it goes through a "subsidy period."

This methodology was once very effective and indeed propelled multiple industries through early expansion. AI is just the latest object to be subjected to this template. On the surface, everything seems reasonable: the gap in model capabilities has not yet been fully widened, product experiences are highly similar, and competing for users through price and subsidies is almost the most direct and safest choice. But what truly made me stop and think was a more fundamental question: Into which problem framework is AI being placed? What exactly are we understanding it as?

If you observe closely, you will find that discussions about AI are slowly diverging. One narrative treats AI as a more powerful tool or product. The core questions are clear: Is the experience good? Is the capability strong? Are there many users? And can it enter mainstream usage scenarios faster? In this narrative, red envelopes are a very natural and handy tool.

At the same time, another narrative is quietly emerging. This one focuses not just on "how many people are using AI," but rather asks: is the position of AI within the entire system changing? In this understanding, AI is no longer just an object used by people but is beginning to be treated as a participant that can be called upon, collaborated with, and assigned a share of the work. When the problem is redefined this way, the focus of the discussion shifts: from growth to structure, from traffic to rules, from subsidies to settlement and order.

This is not a geographical difference, nor is it entirely a divergence in technical routes. More accurately, it is the difference in understanding among different entrepreneurs regarding "What problem is AI really solving?"

Some believe that AI needs to be used by as many people as possible first; others believe that what matters more is placing AI in a suitable position at the system level. Neither choice is absolutely right or wrong. They respond to the realities of different stages and different goals. But when I see AI being pulled back into the familiar narrative of red envelopes, that sadness comes from this contrast. It is not because this path is wrong, but because it reminds me: we may be using a set of very mature answers to respond to a changing question. What I care about has never been "Who is sending red envelopes?" but whether the money is being dispatched by the Agent according to goals.

2. Red Envelopes Solve Growth Problems, But Not Necessarily AI's Core Issues

First, let's clarify one thing: the AI red envelope war itself is not surprising. Red envelopes, subsidies, and free credits have always been very mature growth tools. They solve several very clear problems: getting more people to try a product for the first time, helping users form short-term usage habits, and quickly occupying the position of default choice among products with similar capabilities.

If you treat AI as a tool-oriented product for people, this logic is almost inevitable. But the problem is not whether red envelopes "are useful," but rather what type of question they implicitly answer. Red envelopes are good at solving how people start using AI, but they can hardly answer how AI will actually be used afterward.

As AI is increasingly embedded into systems and processes, being called upon, combined, and collaborated with, the users are no longer just people but may be another Agent or a set of automated logic. The space for red envelopes to play a role is actually shrinking. This does not mean that red envelopes are the wrong choice.

When we continue to use this growth narrative to understand AI, a subtle misalignment will occur. We will become increasingly adept at discussing: which model is easier to use, which is cheaper, which has better subsidies; yet we will increasingly neglect to ask: when AI is truly embedded in systems, processes, and collaborative relationships, how should their division of labor, responsibilities, and value be defined? These questions may not seem urgent in the early stages. But once AI usage begins to detach from "personal experience," they will not automatically disappear as the user scale expands. On the contrary, they often only become truly exposed after scale appears. Red envelopes are platform behavior; settlement is system behavior.

3. Turning Point: When AI's Main Users Are No Longer Just People

If the first two parts discuss narrative choices, the real turning point comes from a change that is already happening: AI's main users are no longer just people. More and more, AI is not being "opened" and "used" by a specific user but is being embedded into systems and processes and called automatically in the background. It appears in approval chains, transaction systems, risk control, and customer service processes, as well as in every node of content production, investment research analysis, and automated operations. In these scenarios, the "users" of AI are often not people but another Agent or a whole set of automated logic.

When the users change, some assumptions that were originally taken for granted also begin to become unstable. We are used to explaining why a person continues to use a product through experience, emotions, and usage habits, but these factors are almost meaningless to an Agent. An Agent does not develop a fondness for red envelopes, nor does it form loyalty due to subsidies; it only cares about a few very rational things: whether it can be called upon stably, whether the output is predictable, and whether the costs and benefits are clear. As AI calls increasingly occur between machines, the growth narrative centered around "people" begins to gradually lose its grip.

This change will be further amplified in scenarios of multi-Agent collaboration. When multiple AIs participate in the same business chain, the questions shift from "Is it easy to use?" to a set of more unavoidable questions: Who completed which part of the work? Who is responsible for the final result? How much value did each create? If deviations or errors occur, who should bear the consequences? These questions are not prominent in the "one person using one AI" stage, but once entering multi-Agent collaboration, they quickly become key to whether the system can operate sustainably in the long term.
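
A rough way to picture what those questions demand of the system: every step in a multi-Agent chain would need a confirmable record of which agent did what, from what inputs, and who answers for it. The sketch below is purely illustrative (the fields and names are my own assumptions, not any project's schema), but it shows the kind of raw material that later attribution and accountability would depend on.

```python
from dataclasses import dataclass, field

@dataclass
class CollaborationStep:
    """One confirmed unit of work inside a multi-agent task chain (illustrative fields)."""
    agent_id: str           # which agent performed the step
    action: str             # what it did, e.g. "decompose", "execute", "verify"
    inputs_hash: str        # what it was given, so the step can be audited later
    output_hash: str        # what it produced
    responsible_party: str  # who bears the consequences if this step is wrong

@dataclass
class TaskRecord:
    task_id: str
    steps: list[CollaborationStep] = field(default_factory=list)

    def contributors(self) -> dict[str, int]:
        """Count confirmed steps per agent: raw material for later settlement."""
        counts: dict[str, int] = {}
        for step in self.steps:
            counts[step.agent_id] = counts.get(step.agent_id, 0) + 1
        return counts
```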

More importantly, these types of questions will not naturally disappear with the improvement of model capabilities. On the contrary, the stronger the model and the more complex the collaboration, the harder it becomes to obscure these issues. In this sense, AI is transitioning from a purely "tool problem" to an "order problem." It is no longer just about whether it can help me get things done, but whether the system can still be understood, managed, and settled when it works alongside other AIs.

When we shift our perspective here, the narrative divergence mentioned earlier becomes clearer. Some entrepreneurs are still solving the question of "How to get more people to start using AI," while others have begun to attempt to answer another slower, more fundamental question: How should the entire system operate when AIs collaborate and call upon each other? This is not a more exciting question, but it is likely to determine whether we can still maintain control over AI once it truly penetrates deep into the system.

4. New Narratives Emerge: When AI Collaboration Begins to Require "Settlement" and "Confirmation"

As the use of AI shifts from "interaction between people and models" to "collaboration among multiple Agents in a system," a class of previously inconspicuous questions begins to emerge: Does the system really know how these collaborations occur?

In systems where multiple Agents participate simultaneously, some Agents are responsible for breaking down tasks, some for execution, and some for verifying results. If the system cannot confirm who completed each step, under what conditions it was triggered, and what impact it had on the final result, then the collaboration itself will become a black box.

In the product phase centered around "people," this ambiguity is not fatal. A person using one AI can generally judge whether the experience is good or the results are accurate through subjective feedback.

However, when AIs begin to collaborate with each other and are embedded into systems and processes, this opacity can quickly accumulate into structural risk. It is against this backdrop that a number of projects have begun to address the two long-missing pieces of AI collaboration: confirmation (who did what) and settlement (how value is distributed). Their technical routes differ, and their entry points are not the same, but they are all addressing the same type of question: when AI is no longer just a tool but a participant in the system, does this system possess the most basic understandability and sustainability?
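
Before looking at how specific projects approach this, a toy illustration of the "settlement" half may help: once contributions have been confirmed and weighted somehow (for example, from records like the TaskRecord sketch above), distributing value is a mechanical step; without that confirmation, any split is arbitrary. The function below is a simplified assumption of my own, not a description of how any of these projects actually settle.

```python
def settle(reward_pool: float, contribution_weights: dict[str, float]) -> dict[str, float]:
    """Distribute a reward pool across agents in proportion to confirmed contributions.

    `contribution_weights` is assumed to come from a prior confirmation step;
    without that confirmation, any split produced here would be arbitrary.
    """
    total = sum(contribution_weights.values())
    if total == 0:
        return {agent: 0.0 for agent in contribution_weights}
    return {agent: reward_pool * weight / total
            for agent, weight in contribution_weights.items()}

# Example: three agents with confirmed contribution scores splitting 100 units.
payouts = settle(100.0, {"planner": 2.0, "executor": 5.0, "verifier": 3.0})
# -> {"planner": 20.0, "executor": 50.0, "verifier": 30.0}
```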

The following companies represent four different solutions.

When looking at Autonolas, Bittensor, Fetch.ai, and Kite AI together, one clear observation emerges: they are not in the same product track and there is almost no direct competition among them, yet they are all earnestly addressing different facets of the same question from their respective positions.

  • Autonolas focuses on whether multiple Agents can truly form long-term, reusable collaborative relationships around tasks. It prioritizes whether the collaborative structure can be established and sustained.

  • Bittensor is concerned with whether the system can distinguish the marginal contributions of different participants when multiple models or Agents are simultaneously involved in production, and based on that, complete value distribution.

  • Fetch.ai emphasizes the decision-making level: when Agents participate in real system operations, whether the identity, authority, and boundaries of responsibility are clear enough, and whether decision outcomes can be traced and attributed.

  • Kite AI attempts to push the question further: if the collaboration process itself cannot be confirmed, then both incentives and settlements lack a solid foundation.

These different entry points point to a common change that is happening: AI is transitioning from "a tool used by people" to "a subject participating in system operations." Once the subject changes, the foundational capabilities that the system relies on will also change. Red envelopes, subsidies, and growth methods remain important; they address the question of "how people start using AI." However, when AI begins to collaborate with each other and embed into processes and systems, what truly determines the upper limit of the system has become more specific and fundamental:

  • Whether collaboration has truly occurred (the focus of Autonolas)

  • Whether contributions can be distinguished and priced (the focus of Bittensor)

  • Whether decision-making has clear boundaries of responsibility (the focus of Fetch.ai)

  • Whether the collaboration process has a confirmable basis on which settlement can rest (the question Kite AI attempts to answer)

This is not a choice made by a single company, but rather the result of a set of questions being addressed by different teams at the same time. This is the true reason for the emergence of a new narrative.

5. Conclusion: It's Not That the Red Envelopes Are Wrong, But That the Questions Have Changed

Let's return to the AI red envelope war we began with. It is neither absurd nor outdated. On the contrary, it is very mature, and can even be called one of the optimal solutions repeatedly validated over the past decade.

When a new technology needs to quickly enter the public eye, form usage habits, and compete for default choice, red envelopes, subsidies, and free credits are indeed the most efficient tools. They solve a clear problem: how to get "people" to start using AI. But what makes me hesitant, even a bit sad, is not the red envelopes themselves, but whether we are using a set of very familiar answers to respond to a changing question.

Because on another, less noisy path, AI has begun to be understood in a different way. It is no longer just a tool called upon by people, but is treated as a subject that can collaborate, divide labor, make decisions, and participate in system operations over the long term. When AIs begin to collaborate with each other, what the system truly needs to answer is no longer "Have users come?" but rather: Did collaboration occur? Where do contributions come from? Are responsibilities clear? Has value truly been settled?

These questions cannot be solved by red envelopes, nor can they be easily bypassed by subsidies. They are slower, more fundamental, and less market-friendly, but they directly determine whether a multi-Agent system has the potential for long-term operation. From this perspective, the new narrative is not a denial of the old narrative.

Red envelopes remain the optimal solution in the old world, but when AI begins to penetrate deep into systems and starts collaborating with each other, what truly determines the future direction is no longer the intensity of subsidies, but whether the rules are established, whether the structure is clear, and whether the settlement is trustworthy. Perhaps it is precisely for this reason that when I see AI being pulled back into the familiar narrative of red envelopes, I feel a sense of loss.

Not because this path is wrong, but because it reminds me: We are at a point where we need to change the questions, not just the answers. When money is still just being sent by the platform, AI remains in the tool stage; when money begins to be dispatched by Agents, AI truly enters the system era.

And this may be where the AI era truly begins.

