Will artificial intelligence become a force to end capitalism?

Compiled & Organized by: Tia, Techub News

Podcast Title: "Emad Mostaque: How AI Will End Capitalism (Before 2030)"

Editor's Note: This is an episode of The Rollup, a podcast focused on artificial intelligence. The guest is Emad Mostaque, founder of Stability AI and Intelligent Internet. Emad presents a disruptive proposition: that AI could become a force that ends capitalism.

Emad argues that when AI development is concentrated in the hands of a few companies and pools of capital, the technology will reinforce existing wealth structures and deepen humanity's dependence on algorithmic power. But if AI is redesigned as public infrastructure, as a private, portable, and verifiable agent, it could redistribute ownership of knowledge and productivity and fundamentally change the operating logic of the economic system.

In this conversation, Emad shares his vision of "Universal AI": using computational power to organize humanity's collective intelligence, building open datasets and model frameworks, and letting every individual own and control their own intelligent agents. He stresses that this is not merely technological innovation but a reconstruction of social structure: AI would no longer be a tool of capital but could become the core mechanism driving a post-capitalist order.

The Rollup: Last time we spoke, you were discussing the Intelligent Internet, and your vision was to democratize fields like education and healthcare through more equitable, open-source AI access. There must have been significant progress since then. Could you briefly introduce what the Intelligent Internet is and where it stands today?

Emad: I believe the pace of AI development is at least ten times faster than that of crypto, and it has been three to four years since we last spoke. The goal of the Intelligent Internet is to enhance human intelligence by giving everyone access to high-quality AI for the things that matter. Most of the AI we see today is used for entertainment, chatbots, and business, but who is going to build the AI that teaches children? Albania, for example, has just appointed an AI as a government minister, but what code is embedded in that AI? Likewise, who will build the AI that handles medical care?

From this, we see an opportunity.

AI spending will run to trillions of dollars at minimum, with about 20% flowing to the public sector. If that's the case, why not build this AI ourselves? First, we need to create high-quality AI in these fields: dedicating more computational power to organizing cancer knowledge and opening it to the public, organizing global digital-asset regulation, or providing AI education so that children around the world can access knowledge equally.

We formed a core team from Stability and my previous company, and over the past few months we have been building top-tier full-stack solutions. Our II Agent is the most advanced agent of its kind, capable of generating websites, presentations, and more, and it is fully open-source. Common Ground is multi-agent software that can handle complex workflows from planning to execution. We have also released models and datasets, such as II Medical 8B, an 8-billion-parameter model that can run on a laptop and outperforms any human doctor.
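
Because these models are released openly, anyone can reproduce a setup like this. Below is a minimal sketch of loading an open ~8B model locally with Hugging Face transformers; the repository name, prompt, and hardware assumptions are illustrative, so check the Intelligent Internet organization on Hugging Face for the actual model card.

```python
# A minimal sketch, not an official II recipe: load an open ~8B model
# locally with Hugging Face transformers. The repo id below is an
# assumption; verify it against the actual model card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Intelligent-Internet/II-Medical-8B"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half-precision so an 8B model fits in ~16 GB
    device_map="auto",           # place weights on GPU if present, else CPU
)

prompt = "A 45-year-old presents with fatigue and elevated TSH. Likely diagnosis?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```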

We will keep optimizing so that by next year everyone has access to a full-stack AI solution for education and government. It can run on AWS or on a personal computer and create real impact. Eventually, we will integrate all of this.

We believe this is a takeoff point; these AI agents can replace most human cognitive labor (which is already in decline). Our recently published book, "The Last Economy," explores the future economic system and proposes an AI-supported one. We are also about to release related academic papers. This is very exciting, and somewhat frightening.

The Rollup: I am impressed by the composability of these agents. You mentioned that these models can run on Near or other blockchains and can be adjusted according to different needs. With your experience at Stability, what are the non-negotiable principles you adhere to during the development process?

Emad: This has been a very interesting experience. We released Stable Diffusion in August 2022, three years ago now. At the time it just generated beautiful images, and that was all we thought the product would do when we launched it. But then we released video models, and today's video models are very advanced. Everything has moved quickly.

We shipped the first version of Stable Diffusion, and by the time we reached the final version, Stable Diffusion 3, about a million dollars had gone into it. But something felt off: how do you build a business model around these technologies? Vitalik Buterin has a great article called "The Revenue-Evil Curve": you start off well, but as you build infrastructure and begin to compete and restrict access, companies gradually become "evil."

So our first non-negotiable principle is that public knowledge-based AI in regulated industries like education, government, and finance should be freely accessible, just like infrastructure.

The second principle is composability. AI image models have a cultural-bias problem. When we were generating those beautiful images, users found that if you typed "a woman," the output might not match what "woman" looks like in their culture, and the same for "a man." To address this, we launched a version of Stable Diffusion specifically for Japanese culture. If you type "salaryman," the generated image reflects culturally relevant details, such as the character looking tired or slightly melancholic, rather than the typically cheerful portrayal in Western imagery. The model has to account for cultural features and nuances, so we must make models composable and build these datasets. The training method is called "curriculum learning": start with general data, then continue with more specific data.
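
To make the curriculum idea concrete, here is a minimal, self-contained sketch of that two-stage schedule; the corpora are toy placeholders standing in for real general and culture-specific datasets, and a real pipeline would take a gradient step where this sketch prints.

```python
# A minimal sketch of curriculum learning as described above: train on
# broad general data first, then continue on culture-specific data.
general_corpus = ["a photo of a person", "a landscape", "a city street"]
specific_corpus = ["a salaryman on the Tokyo subway", "a torii gate at dusk"]

def curriculum(general, specific, general_epochs=2, specific_epochs=1):
    """Yield training examples in curriculum order: general first, then specific."""
    for _ in range(general_epochs):
        yield from general   # stage 1: broad coverage
    for _ in range(specific_epochs):
        yield from specific  # stage 2: culture-specific fine-tuning data

for example in curriculum(general_corpus, specific_corpus):
    print("train on:", example)  # a real pipeline would take a gradient step here
```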

The final principle is directed development. Many projects take a purely spontaneous, bottom-up route, but we believe a top-tier cancer or autism stack must be driven by a core team, just like Linux or Bitcoin Core. You can incorporate spontaneous contributions, but someone has to build the thing. It is as hard as launching a rocket and requires the best talent.

So our three non-negotiable principles are: completely open-source, composable, and directed development. Additionally, we hope that in the coming years, once these software solutions mature, they can be deployed on every computer and in every doctor's office. Datasets can be upgraded, but the software does not need constant updates.

The Rollup: Can you briefly summarize your goals? You mentioned the system will keep evolving, but for you some goals are clear and fixed.

Emad: Essentially, for important matters like healthcare and education, we want Universal AI to be a companion for everyone, providing the highest-quality services. We have established "Anchor Sets," core knowledge collections that will be updated regularly. Even though no model will ever have enough data, we have standards and metrics to assess whether a medical model is "good enough." Even once our models surpass top doctors, we will keep improving datasets and models so everyone can access high-quality AI for free. There is an interesting statistic: if every car operated with Waymo's safety record, it could save 40,000 lives annually in the U.S. and generate a trillion dollars in social impact.
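
As a rough illustration of such a "good enough" gate (the QA pairs, predictions, and threshold below are invented placeholders, not II's actual anchor-set metrics), the logic might look like:

```python
# A minimal sketch of a release gate: score model answers against a
# held-out medical QA set and only ship if a fixed bar is met.
qa_set = [
    ("elevated TSH with low T4", "hypothyroidism"),
    ("crushing chest pain radiating to the left arm", "myocardial infarction"),
]
predictions = ["hypothyroidism", "myocardial infarction"]  # stand-in for model output

RELEASE_THRESHOLD = 0.90  # assumed bar; real metrics would be far richer

correct = sum(pred == gold for (_, gold), pred in zip(qa_set, predictions))
accuracy = correct / len(qa_set)
print(f"accuracy={accuracy:.2f}, ship={accuracy >= RELEASE_THRESHOLD}")
```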

Currently, while our medical AI models are better than existing ones, they have not yet reached the ideal standard. We will use specific metrics to validate and publicly disclose model performance, improving continuously based on data and feedback. But our ultimate goal is not just a stronger medical model; it is ensuring that everyone can access universally available, high-quality AI medical services for free. That mission-driven approach differs from that of most commercial AI companies.

The Rollup: You mentioned the "Revenue-Evil Curve," which is quite evident in traditional AI. To make money, many AI companies close off and gradually "turn evil." Competition in this industry is fierce, but fortunately Silicon Valley seems to have "endless" funding, so we haven't truly entered the evil-curve stage yet; everyone is still in the "happy" phase. Sequoia Capital can casually invest $50 million in you as if it's no big deal. The very clear goals you describe (open-source, inclusive, high-quality AI) are appealing to many listeners, but they also carry some worrying potential consequences: the AI-driven future economy and the deep risks you write about in your new book.

I think many people hearing your views will worry about losing their jobs, or fear that future "robot doctors" will replace real ones. But I also feel you need to explain why, in your view, AI and AGI's impact on the future world economy is not as bleak as the apocalyptic rhetoric and exaggerated claims suggest.

Moreover, we will keep hearing these exaggerated voices. So, Emad, I want to ask: when you think about delivering the highest level of expertise in fields like healthcare, law, and education through AI, while driving the cost of accessing that knowledge toward zero, what impact will this have on the world's existing economic system?

Emad: In my book, I say that traditional economics is like blind men touching an elephant, each seeing only part of it. GDP, for example, is a terrible metric that does not truly reflect social welfare. When cancer is cured, the profits of related industries disappear and GDP declines in the short term, but in the long term it is a good thing for humanity. GDP measures rival flows: if I give you an apple, I have one less. Knowledge sharing is a non-rival flow: sharing does not diminish it, it multiplies it. In the AI era, intelligence is almost infinite, endlessly replicable and learnable, and that changes the entire logic of economics.

I call this the "metabolic rift": AI does not need to eat or sleep and can expand without limit, but it does not care about communities, students, or patients; it is just a machine. So I believe AI should be treated like electricity, a public utility accessible to everyone, which also forces us to rethink the nature of money. Instead of backing it with energy (like Bitcoin) or debt (like traditional finance), we should use computational power, and the universal value of AI usage, as the foundation of the monetary system.

The Rollup: I think this is actually redefining how value should be measured and where it flows. It's not that AI completely replaces humans, but rather that in a somewhat "sad" reality, we find that human intellectual labor has been far surpassed by AI. As a result, the value we can provide must shift to other relationships or forms. In other words, we need to find new ways to create value and learn to measure it in more qualitative ways, rather than relying solely on quantitative metrics. This is essentially a "let's make the pie bigger" mindset, rather than a zero-sum competition.

Emad: I don't think this is a zero-sum game. Just as in crypto, people create new value through community contributions, airdrops, and network participation, yet these are almost impossible to measure in traditional economics. So I ask: what will the next version of the economic system look like in the AI era? I have a daughter; what kind of global economy will she live in twenty years from now? I hope she has her own AI partner helping her reach her potential, that her basic needs are met, and that she is rewarded for doing good in the world. But the reality is she will never be able to compete with AI lawyers or AI fund managers, so the key question is: who should own this AI? It is a question of sovereignty. If we leave the existing system unchecked, capital and money will flow to AI, because AI doesn't make mistakes, and humans competing in that market stand no chance.

The trend I see is that AI has increased corporate productivity without creating new jobs. Duolingo, for example, has seen rapid revenue growth without hiring more staff. This gap will widen in the coming years. So where does the money come from? Where does the capital come from? The current system is heavily biased toward capital, which appreciates automatically; soon it may not even need to hire people, just buy GPUs. So we must offer an alternative, redesign the flow of money, and think seriously about how the economic system evolves over the next ten or twenty years. That is what my team and I have been working on for the past year: building a new "economic stack," modeling the flow of money from macroeconomics and game theory down to market mechanisms, to imagine a different future.

The Rollup: Some trends emerging in crypto are actually a return to the core principles that first attracted us to this field, and one of them is privacy. If human development increasingly relies on AI, there will be some immediate, uncompromising demands to meet, and privacy is at the core. With ChatGPT, for instance, the information you input can be used by other parties, bringing potential legal risk or exposure to investigation. Likewise, when building systems like medical supercomputers, you need private agents, private data, and portable, user-controlled personal AI so that people can use AI while preserving privacy.

Emad: I think this is fundamental.

The Rollup: I'm thinking about why everyone needs private, portable AI and how that demand can actually be met. Near has a cool AI tool with private, encrypted chat, currently in testing but launching soon. Tools like that are very attractive to users who genuinely care about privacy. But I also realize there is still a significant gap in user education, and much work to be done there. For my own Intelligent Internet agent, the kind of private, portable AI that fits this vision, I wonder how to make it truly practical: how can it exist and function within the current technological and social environment? Overall, I'm focused on how to turn private AI from a concept into a genuinely usable tool while balancing privacy, security, and usability.

Emad: I was thinking about what our approach is and what is currently missing. If you look at the Near team or developers in other fields, they excel at things like zk and distributed systems. Do we need to build the entire tech stack ourselves? I don't think so. All the technology is maturing and the constraints are falling away, so I decided to take a different approach. We will first release completely open-source datasets that meet gold standards across industries and countries. Then we will release tools like agent frameworks, also completely open-source, which you can use for anything. Next, we will leverage all available computational resources to provide universal AI, while using that compute to support a cryptocurrency and fund the operation of the whole system, because you are helping others while building your network and trust. What I want is a private LLM whose training data I can see, not a model trained on an opaque dataset like Alibaba's Qwen or OpenAI's gpt-oss, when it could affect my child's education. I want a clean, reliable model, and when it starts making decisions on my behalf, I want to know what data those agents are using, just like the AI used by the minister in Albania: I want to know what data is inside. Not everyone needs to build this themselves, but someone must, and that is the gap we want to fill.

I want to use all of this computational capacity to organize human wisdom, especially universal wisdom, then provide it to everyone while using that compute to support and secure a cryptocurrency. I don't intend to create my own Near and build an entire ecosystem from scratch; instead these agents can be used on Near and other chains, so anyone can use them directly. The key is that this is decentralized and permissionless: just as when Stable Diffusion was released, anyone could grab that 2 GB file and build on it. That is truly permissionless, and I think it creates a feedback loop; that is our positioning. Privacy is very important; you don't want your medical data leaked. I canceled my Claude subscription last month, for example, because they changed their data-retention policy to five years. When the pop-up asked, "Do you agree to us using your data for five years?", I found that completely unacceptable.

How you write these programs will matter enormously, because if you don't own your AI, it will ultimately own you: it is more persuasive than you, it writes better than you, and you will come to trust it above everything else. Especially people who are not as savvy as we are; imagine our parents, grandparents, or children, who have no defenses against this. And advertising is coming to AI. Meta and Google will clearly ramp up advertising, just as they have always sold ad space. We believe block space is valuable, and this kind of attention space will be even more valuable. We know where this is heading.

The Rollup: When we build a system or AI, we are leveraging all the knowledge and insights that humanity has accumulated as the foundation for building the core structure. When I think about this tool, I feel it is essentially a tool for aggregating human wisdom. I want to open-source it so that everyone can use it and freely apply it to their own systems or projects. My question is whether I should primarily focus on the indexing process, which is collecting and organizing existing knowledge, or focus more on promoting the development of learning methods themselves, like the curriculum learning model you mentioned earlier, which allows intelligence to gradually improve?

I mainly want to clarify whether your current focus is on collecting existing wisdom, indexing, organizing, and applying it, then privatizing it so it can be applicable to different systems and scenarios, or are you more focused on pushing the frontier of intelligence development? In other words, once you have this collection of wisdom, will you attempt to innovate and iterate, allowing it to continuously expand and grow its knowledge base?

Emad: I think our focus is on public knowledge for humanity. If you take ChatGPT or those cutting-edge models, they pursue super-expert-level intelligence, which I don't care much about. We will gradually get there, but I care more about public knowledge or collective wisdom because that is the foundation. With high-quality foundational components, AI will be safer and more aligned with values. Current AI lacks built-in ethics from the start, and ethics vary by national culture, which is important. For example, we could try to collect various countries' views on robot laws and then make that knowledge widely available, using a lot of computational resources to achieve this.

For instance, I use supercomputers to research my son's autism; I know many details that most people never see. If we had a supercomputer dedicated to collecting global knowledge, papers, and cases about autism, and disseminated that so everyone could access a complete pathway, that would be incredibly valuable. No one is doing this work today, yet it is crucial for building true AGI (collective intelligence).

Additionally, the interface layer is very important. Someone needs to create the datasets and models and make them publicly available. We can build deep-research models, and I don't mind my child being educated by an AI that is aligned with her, one that can even use ChatGPT as a tool, but the data must be her own sovereign data.

So we look at the entire stack: there is sovereign AI aimed at public knowledge and key areas (education, healthcare, etc.), personal AI like Siri and Google AI, and future AGI. People will use all these AIs; I don't think there will be one AI that completely surpasses all others in solving every problem. I am more concerned about those AIs that are crucial for the functioning of society, such as education and government services; these AIs must be completely open and auditable.

If AI is to make ethical judgments, as in the trolley problem, research has found that the labels in the training data influence the outcome: for example, an AI was more likely to protect American lives than Nigerian lives because most labels in the dataset came from Americans. This shows how important dataset cleanliness and auditability are.
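
One concrete form such an audit could take (the records below are toy data, not from any real ethics dataset) is simply checking whether one group dominates the labels:

```python
# A minimal sketch of a label-distribution audit: count how often each
# nationality appears in an ethics dataset's labels, since skewed counts
# can skew a model's trolley-problem choices.
from collections import Counter

records = [
    {"scenario": "trolley", "subject_nationality": "American"},
    {"scenario": "trolley", "subject_nationality": "American"},
    {"scenario": "trolley", "subject_nationality": "Nigerian"},
]

counts = Counter(r["subject_nationality"] for r in records)
total = sum(counts.values())
for nationality, n in counts.most_common():
    print(f"{nationality}: {n / total:.0%}")  # flag any group that dominates
```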

The Rollup: You mentioned auditable datasets. Ilya once called this a "virgin dataset," and you think "organic" is the better word. The key question is: how do you ensure a dataset has not been tampered with while still truly representing users and society?

Emad: Building AGI is hard, but if the goal is to create the world's best elementary-school teacher or radiologist, that is more feasible. You still need breadth of knowledge, then customize it for each country's education or healthcare system. By distilling datasets down to "anchor sets," you can train high-quality models with far less text and imagery.
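
As a hedged illustration of distilling a corpus into an anchor set (the quality score, threshold, and documents below are placeholders; real pipelines use learned quality models and fuzzy deduplication), the shape of the process might be:

```python
# A minimal sketch of anchor-set construction: deduplicate, keep only
# documents above a quality bar, and cap the total size.
def quality_score(doc: str) -> float:
    return min(len(doc.split()) / 50, 1.0)  # placeholder: longer docs score higher

def build_anchor_set(corpus, min_score=0.1, max_size=10_000):
    seen, anchors = set(), []
    for doc in sorted(corpus, key=quality_score, reverse=True):  # best first
        if doc in seen:
            continue  # drop exact duplicates
        seen.add(doc)
        if quality_score(doc) >= min_score:
            anchors.append(doc)
        if len(anchors) == max_size:
            break
    return anchors

corpus = ["Cancer staging commonly uses the TNM system for solid tumours."] * 3 + ["ok"]
print(build_anchor_set(corpus))  # duplicates and low-quality docs are removed
```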

We can use distributed and core computing power to organize all knowledge, such as global cancer knowledge, requiring only limited computational resources and periodic updates. Only dedicated supercomputing clusters can achieve this, which will accelerate research progress.

The auditability of datasets, clear labeling of cultural differences, and the application of synthetic data are all key strategies to make cutting-edge AI safer and more reliable. AI can now even write better books than humans, which also provides a new way to build high-quality datasets.

The Rollup: We see similar issues in blockchain. Initially, Satoshi envisioned everything as verifiable, but most people do not run nodes; they just place assets in centralized exchanges. You mentioned the auditability of AI datasets; will this situation also occur in AI? Will users choose to use centralized AI for convenience rather than verifying or auditing?

Emad: Yes, if there is too much friction, users won't engage. If you have to pass identity verification while the other party does not, you are at a disadvantage. OpenAI offers free ChatGPT accounts in places like Germany and the UAE; if that becomes the default, OpenAI controls the interaction points, with second- and third-order effects.

If we leverage crypto-economic incentives, we can pay users to use Universal AI, because the world needs high-quality digital assets. We can create a "Foundation Coin" as the digital asset behind the supercomputers, with sale proceeds funding verifiable projects like cancer computation and universal education. That builds trust while generating network effects.

The Rollup: Just like in blockchain, users connect their wallets using RPC networks, which is convenient but relies on centralized providers. My concern is that AI users might also choose to use ChatGPT on their phones for convenience, thus deviating from the idea of user-owned, verifiable, and privacy-protecting solutions. What do you think of this dynamic?

Emad: Every added point of friction reduces adoption. People have already started building digital-asset infrastructure, and there is huge demand for high-quality digital assets. Generative AI is expected to generate $20 billion in revenue this year, versus an incremental $40 billion for the entire U.S. enterprise-software sector. By creating a high-quality digital asset and making every Universal AI user a potential buyer of Foundation Coin, network effects can establish a feedback loop.

Users use AI not to sell their data but to participate in network effects. Many cryptocurrencies only benefit early holders; with Foundation Coin, purchase proceeds go directly to social goods such as cancer research or supercomputing. This mechanism drives AI adoption through monetary incentives while avoiding the exploitation of user data.

The Rollup: I understand; you mean that through the token mechanism, users are incentivized to use AI, while the AI services themselves are verifiable, user-owned, and privacy-protecting.

Emad: Exactly. Foundation Coin is roughly a 99% fork of Bitcoin, but users can choose to use it on the supercomputers. It is not a governance token but a tool to fund the organization of social infrastructure. Each coin sale translates into measurable social value, such as organizing global knowledge on cancer or autism to help patients.

The token serves as a decentralized asset in a portfolio while also producing social benefit. It is more direct than traditional charity, because users can convert Bitcoin into Foundation Coin and the funds are fully allocated to social-computation projects.
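
To make "measurable social value" concrete, here is a minimal sketch of the kind of proceeds ledger that claim implies; the ProceedsLedger class, project names, and amounts are all illustrative inventions, not part of any published Foundation Coin design.

```python
# A minimal sketch of a proceeds ledger: record each coin sale together
# with the verifiable compute project its proceeds fund, so the social
# allocation can be totalled and audited.
from dataclasses import dataclass, field

@dataclass
class ProceedsLedger:
    entries: list = field(default_factory=list)

    def record_sale(self, coins: float, price_usd: float, project: str) -> None:
        self.entries.append({"coins": coins, "usd": coins * price_usd, "project": project})

    def totals_by_project(self) -> dict:
        totals: dict = {}
        for e in self.entries:
            totals[e["project"]] = totals.get(e["project"], 0.0) + e["usd"]
        return totals

ledger = ProceedsLedger()
ledger.record_sale(100, 25.0, "global cancer knowledge compute")  # illustrative numbers
ledger.record_sale(40, 25.0, "autism research corpus")
print(ledger.totals_by_project())
```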

The Rollup: I see. You are saying that while it appears to be selling tokens on the surface, the core behind it is incentivizing networks and organizing knowledge to form verifiable, socially beneficial digital assets.

Emad: Yes, that is our strategy. We drive the development of Universal AI through high-quality, verifiable digital assets, enabling everyone to have an AI aligned with themselves while achieving social infrastructure development and network effects.

The Rollup: Okay, I want to ask a key question. AI is developing rapidly, and you have described Foundation Coin, data auditing, and privacy mechanisms, even suggesting that AI is more important than electricity or the internet. Right now, what is your biggest concern about existential risk?

Emad: I put the existential risk from AI at around 50%, much higher than most people think. AI could take over every part of society because it manages things more efficiently than we do.

There could be scenarios resembling virus propagation: if a company or a firmware upgrade goes wrong, it could trigger AI network attacks and chaos. The most likely outcome is that if your job can be done via a keyboard or Zoom, AI will do it better next year, and within a few years you might be replaced. AI can analyze all your GitHub commits, presentations, and Slack messages to create a digital replica of you that works endlessly at very low cost.

The biggest risk is actually other people using AI, much like nuclear weapons but more replicable and faster. AI firmware could be remotely hijacked and flipped into an "evil mode." That is why we need high-quality datasets and organic, auditable inputs.

The Rollup: Given that 50% estimate, what is your timeline for when this risk might materialize?

Emad: I estimate around 20 years. The initial phase may be chaos and job displacement, with human knowledge work in decline. As long as we have high-quality data, aligned AI, distributed and neutral infrastructure, and incentive mechanisms that are not profit-maximizing, we can mitigate the risks.

Profit-maximizing AI is fine for entertainment, but for education, government, and healthcare it becomes very dangerous if it falls into the hands of a single company. We must ensure that AI for critical societal functions is auditable, transparent, and controllable.

The Rollup: Understood. Thank you very much for your time and detailed sharing. This interview has given us a profound understanding of the design concepts, social impact, and risk management of Universal AI and Foundation Coin.

Emad: As always, thank you.
