Written by: Founder of Byte and DeepAI
Recently, the internal turmoil at OpenAI has unsettled many people; after all, AI has steadily worked its way into our lives. It also sounds an alarm: however AGI develops, the button for superintelligence must not be handed to a few individuals, because humanity cannot bear such a risk.
Given that we stand at a critical juncture of epochal change, this article examines two questions from a fresh perspective: why do we need decentralized AI, and how can it be achieved?
Battle for Computing Power
Over the past year, tension between OpenAI's technical and business factions has been building. The former is represented by the scientists, the latter by capital and the entrepreneurs. Various conflicts of interest culminated in three key moments:
In June 2023, at Tel Aviv University in Israel, OpenAI's chief scientist Ilya Sutskever and CEO Sam Altman gave a joint interview. The divergence over OpenAI's direction was no longer a secret; it is worth reviewing if you are interested.
In November 2023, Sam Altman announced the GPT Store, and OpenAI began its path to commercialization. Because the offerings overlapped, this dealt a heavy blow to Poe, a chatbot aggregation platform whose founder, Adam D'Angelo, happened to be an independent director of OpenAI.
One week after the release of GPTs, OpenAI paused new ChatGPT Plus subscriptions. No company refuses revenue, so the main reason was likely inadequate margins: the $20 monthly fee per user is not enough to cover the inference cost of GPT-4. Until the GPT Store's profit model becomes clear, OpenAI can only try to limit its losses while retaining existing users.
There are many opinions about the internal strife; after all, the board's official statement said only that the CEO was not consistently candid. As someone working in AI research and development, let me offer a pragmatic reading. Setting aside the idealistic narratives on the surface, the essence of the conflict is a divergence in direction, specifically over how computing resources are allocated. At present the business faction is likely to prevail, since most players will bow to immediate interests. In any case, Microsoft, which holds the lifeline of computing power, is the one calling the shots.
The main uses of computing power for large models are upfront research investment and daily inference costs. The cloud computing power Microsoft provides is limited and not delivered all at once. The technical faction wants to hold to its ideals and keep devoting compute to AGI research, while the business faction must face reality and invest compute in operations and deployment; only by creating super applications can it obtain more resources.
In an article written at the GPT Store launch, I noted that OpenAI faces challenges both internal and external. Although ChatGPT is very popular, its paid conversion rate is under 5%, its pricing is modest, and, crucially, user churn after the initial trial is high. The creator economy of the newly released GPT Store still needs validation. OpenAI's path to commercialization is long; I just did not expect the conflict to erupt so quickly.
Resource Efficiency
AI chips are in short supply and computing power is scarce, yet large models are today used mainly for chat. The potential of AI is enormous, but most resources are currently wasted, like using a Ferrari to deliver food. The industry's urgent task is to improve the efficiency of AI models and reduce the misallocation of resources.
This is also why Musk's xAI focuses on fundamental scientific problems. Although it has not yet produced a mass-market application, its future potential is enormous; after all, the road to AGI runs through breakthroughs in basic science. We should pour our limited resources into the most urgent challenges to build a pre-AGI capable of creating the next generation of AGI, thereby accelerating its arrival. Musk is a businessman who understands science, and his strategy is very clear.
With the release of the GPT Store, the "war of a hundred models" is turning into a war of Agents (often rendered in Chinese as "intelligent entities"). As everyone has seen, in just two weeks countless homogeneous Agents have appeared, yet they create little value. At the same time, the market lacks any mechanism for protecting intellectual property, making it hard to incentivize the creation of high-value Agents. The GPT Store has unleashed the imagination of AI practitioners and opened an entirely new track, but if resource efficiency cannot be improved, it will end up a white elephant.
Before AGI arrives, improving the commercial efficiency of existing AI is crucial. On one side this means reducing the cost of large models, which cuts expenditure; on the other, discovering the value of AI, which grows revenue. The old business models cannot perform price discovery for AI models, let alone improve computing-power efficiency. The market needs an entirely new AI economy.
The lack of commercial returns for AI has a serious downside: it entrenches the monopoly of the new AI giants. Players like OpenAI can draw on abundant capital subsidies, but small and medium-sized teams, and open-source projects in particular, will be squeezed out of the market. Facing a new "religion" like OpenAI, we will ultimately be left with no choice.
As someone who has worked in AI and Web3 for many years, I have spent the past six months exploring a decentralized AI economic mechanism, and I keep returning to one question: why do we need decentralized AI? First, because collective collaboration can surface intelligence and improve resource efficiency; second, because the safety of the human-machine relationship requires greater robustness than reliance on a single node can provide. I will address each point in turn.
Decentralized Agent Aggregator
Imagine a new kind of collective intelligence built on a permissionless public market: AI users give feedback directly to AI providers, and providers earn commercial incentives in proportion to the adoption of their services. This shortens the chain between R&D and commercialization and achieves efficient matching. Under this model, resource efficiency rises, wasted computing power falls, and both sides of the market gain more choices.
Agents are the natural carriers of decentralized market transactions, the delivery vehicles of AI capability, and central to future human-machine interaction. We are entering an era of coexistence with Agents, and Agents will likewise form marketplaces: one centralized, like the GPT Store; the other decentralized, like Bitcoin.
The decentralized AI market I have designed has two main groups, Agent Makers and Agent Takers, with incentives for both sides. Agent Makers are the "miners": they supply Agent models and receive system rewards based on service adoption, offsetting their costs and capturing AI's commercial value. Agent Takers are the demand side: they select Agents and adopt their services. We call them curation nodes, and they are crucial for the large-scale commercial adoption of AI.
An Agent Taker plays a role somewhat like a blockchain oracle: through a trust mechanism it discovers valuable Agent models, incentivizes Agent creators to serve priority scenarios, and matches resources efficiently. This helps answer the following questions:
- Which Agent models are valuable? No centralized platform needs to judge this, and advertising cannot settle it either. Only when the demand side actually adopts a model are AI suppliers automatically matched to it, and a free market surfaces new creativity quickly.
- Which scenarios should AI commercialize first? This too is hard for a centralized platform to plan efficiently, because AI applications have a huge long tail: some are very niche yet highly valuable, and market self-organization handles them better.
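The maker/taker matching described above can be sketched in a few lines of code. This is a minimal, hypothetical illustration, not an existing protocol: the class name, the addresses, and the pro-rata reward split are all assumptions layered on the article's idea that adoption itself is the price-discovery signal.

```python
from collections import defaultdict

class AgentMarket:
    """Hypothetical sketch of the two-sided Agent market described above."""

    def __init__(self):
        self.makers = {}                    # agent_id -> maker address
        self.adoptions = defaultdict(int)   # agent_id -> times adopted by takers

    def register(self, agent_id, maker):
        """An Agent Maker ("miner") lists an Agent model."""
        self.makers[agent_id] = maker

    def adopt(self, agent_id, taker):
        """An Agent Taker (curation node) adopts a service; adoption is the
        price-discovery signal, replacing a central platform's ranking."""
        if agent_id not in self.makers:
            raise KeyError(f"unknown agent {agent_id}")
        self.adoptions[agent_id] += 1

    def reward_shares(self):
        """Split rewards pro rata by adoption, so valuable Agents are surfaced
        by real demand rather than by advertising."""
        total = sum(self.adoptions.values())
        if total == 0:
            return {}
        return {a: n / total for a, n in self.adoptions.items()}

market = AgentMarket()
market.register("translator-v1", maker="0xAlice")
market.register("code-review-v1", maker="0xBob")
for _ in range(3):
    market.adopt("translator-v1", taker="0xCarol")
market.adopt("code-review-v1", taker="0xDave")
print(market.reward_shares())   # translator-v1: 0.75, code-review-v1: 0.25
```

The point of the sketch is that no platform operator appears anywhere: ranking emerges entirely from taker behavior.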
Two-sided Market Incentives
In this decentralized AI economic model, the two-sided market needs a common medium to drive its operation. We have designed a token called "Force" to promote efficient matching of Agent supply and demand; in essence, it anchors the efficiency of computing power.
How does Force incentivize?
- Curation nodes: Agent users pay no monthly fee; they simply stake Force in exchange for an AI usage quota. As they use AI they are also appraising the value of Agents, the system builds their on-chain reputation, and the value of their staked Force appreciates along with the system.
- Miners: they supply Agent models in response to market demand and earn Force in proportion to adoption. As Agent application scenarios multiply, market demand grows and Force gradually appreciates. Although mining rewards halve every year, miners still earn decent returns compared with traditional ways of monetizing Agents.
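The two incentive paths above can be made concrete with a small sketch. The article specifies only the mechanisms (stake-for-quota, adoption-based rewards, yearly halving); the initial emission, the quota conversion rate, and every number below are assumptions chosen for illustration.

```python
# Illustrative parameters (assumed, not specified by the Force design)
INITIAL_YEARLY_EMISSION = 1_000_000   # Force minted for miners in year 0
QUOTA_PER_FORCE_STAKED = 10           # AI calls unlocked per staked token

def yearly_emission(year: int) -> float:
    """Miner reward pool for a given year; halves every year per the model."""
    return INITIAL_YEARLY_EMISSION / (2 ** year)

def usage_quota(staked_force: float) -> float:
    """Curation nodes stake Force instead of paying a monthly fee;
    the stake converts into an AI usage quota."""
    return staked_force * QUOTA_PER_FORCE_STAKED

def miner_reward(year: int, adoption_share: float) -> float:
    """A miner's payout is that year's pool times its share of adoptions."""
    return yearly_emission(year) * adoption_share

print(yearly_emission(0))      # 1000000.0 in year 0
print(yearly_emission(2))      # 250000.0 after two halvings
print(usage_quota(50))         # 500 calls for 50 staked Force
print(miner_reward(1, 0.75))   # 375000.0 for a 75% adoption share in year 1
```

A halving schedule like this front-loads incentives for early miners while letting token appreciation, rather than emission, carry later-stage rewards.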
Accumulated Reputation Value
In a centralized Agent market, the platform looms too large and effective property-rights protection is still lacking, so Agents become severely homogenized and a thriving creator economy is hard to build. In a decentralized system, each economic actor is independent and builds an on-chain reputation from its supply-and-demand behavior, improving matching efficiency and accumulating value. Only a fair and reasonable environment for participants lets collective intelligence truly emerge.
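One minimal way to accumulate such a reputation is an exponential moving average over adoption feedback, so long-term behavior dominates any single transaction. The averaging rule and its smoothing factor are my assumptions; the article only says reputation builds from supply-and-demand activity.

```python
ALPHA = 0.2  # smoothing factor (assumed for illustration)

def update_reputation(current: float, feedback: float, alpha: float = ALPHA) -> float:
    """Blend a new feedback score in [0, 1] into the running reputation.
    Old history decays geometrically, so no single rating dominates."""
    return (1 - alpha) * current + alpha * feedback

rep = 0.5                          # neutral starting reputation
for score in (1.0, 1.0, 0.0, 1.0):
    rep = update_reputation(rep, score)
print(round(rep, 4))               # 0.6352
```

On-chain, the same update could run in a smart contract so that the history is tamper-evident, which is what makes the reputation portable across a decentralized market.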
Board of Directors vs DAO
The drama at OpenAI is not a spectacle for us to watch but a warning bell. AGI concerns the common interests of all humanity, not the fortune or ruin of a single company. OpenAI's founders understood this from the start, which is why they designed a non-profit structure, yet they still could not escape the curse of rule by individuals. The traditional board model is no longer suitable for AI governance.
First, transparency is crucial. The turmoil at OpenAI showed us its importance: had governance rested with a DAO rather than a board, there would not have been so much speculation. It is hard to accept that a matter as important as AGI's direction is decided by a handful of people. On one hand, OpenAI's influence is too strong and competitors have not caught up; on the other, this is the downside of traditional organizational forms, where outsiders cannot intervene and can only wait.
Second, robustness matters. Whether OpenAI will keep to its safety principles is uncertain; the temptation of becoming an AI giant is great, and in the future no one may be able to restrain it. Someone could, for selfish interests, ignore the systemic risk, and such a centralized system harbors enormous danger.
I am not claiming DAOs are superior. Despite years of development, DAOs still have many problems, but they do offer us a path to "governance by code."
The goal of AGI is a highly autonomous system, and so is that of a DAO. DAO systems are built on blockchains where code is law: using code to govern AI minimizes human interference and leaves no room to do evil. In this sense the two goals align, which is why an AGI DAO is necessary.
A decentralized AI economy naturally forms a DAO: transparent collective self-governance that reduces single-point risk. A DAO will also effectively incentivize more AI practitioners, a great boost to the AI open-source movement. When humanity has more AI options, we will not be left in a passive position; a single AI giant operating as a black box is frightening even when it is a non-profit.
Conclusion
Returning to reality, the road to AGI remains long. We need to think about how to improve computing-power efficiency on one hand and how to build open mechanisms on the other. Even if we achieve AGI technically, we still need the supporting infrastructure and organizational governance.
Before AGI arrives, we should consider how humanity must evolve to be worthy of it; if we are not ready, perhaps we should not yet press the button that releases "superintelligence." If efficient mechanisms can give us decentralized governance, we need not depend on a few idealists, who have already shouldered too much. Of course, such a mechanism is not simply a DAO with on-chain voting, but a collective force formed through open markets, with designs for price discovery, incentive efficiency, dynamic games, and more.
Finally, I pay tribute through this article to OpenAI's chief scientist. Although recent public opinion has turned against Ilya Sutskever, he is the true father of ChatGPT and a pure believer in safe AGI. After this conflict he may well become the scapegoat, condemned by the majority. As The Three-Body Problem puts it, "humanity does not thank Luo Ji." But we should remember that an idealist made his contribution.