Stepping into the wave of decentralized AI together: the core vision, technical implementation path, ecosystem focus, and future roadmap of 0G.
Written by: Deep Tide TechFlow
OpenAI CEO Sam Altman has repeatedly stated in podcasts and public speeches:
AI is not a model competition, but a creation of public goods that benefits everyone and drives global economic growth.
In today's Web2 AI landscape, widely criticized for oligopolistic monopoly, the Web3 world, with decentralization as its spiritual core, has a project whose core mission is "making AI a public good." In just over two years since its founding, it has raised $35 million, built a technological foundation capable of supporting innovative AI applications, attracted over 300 ecosystem partners, and grown into one of the largest decentralized AI ecosystems.
That project is 0G Labs.
In a deep conversation with Michael Heinrich, co-founder and CEO of 0G Labs, the concept of "public goods" appeared multiple times. When discussing his understanding of AI as a public good, Michael shared:
We aim to build a decentralized, transparent, open, secure, and inclusive AI development model, where everyone can participate, contribute data and computing power, and receive rewards, allowing society as a whole to share in the AI dividends.
When discussing how to achieve this, Michael broke down 0G's specific path:
As a Layer 1 specifically designed for AI, 0G has outstanding performance advantages, a modular design, and an infinitely scalable and programmable DA layer. From verifiable computing and multi-layer storage to an immutable traceability layer, we are building a one-stop AI ecosystem that provides all the necessary key components for AI development.
In this issue, let us follow Michael Heinrich's sharing and delve into the core vision, technical implementation path, ecological focus, and future roadmap planning of 0G in the wave of decentralized AI.
Inclusiveness: The Spiritual Core of "Making AI a Public Good"
Deep Tide TechFlow: Thank you for your time. To start, please introduce yourself.
Michael:
Hello everyone, I am Michael, co-founder and CEO of 0G Labs.
I come from a technical background, having worked as an engineer and technical product manager at Microsoft and SAP Labs. Later, I shifted to the business side, initially working at a gaming company, and then I joined Bridgewater Associates, where I was responsible for portfolio construction, reviewing about $60 billion in trades daily. After that, I chose to return to my alma mater, Stanford, for further studies and started my first entrepreneurial venture, which later received venture capital support and quickly grew into a unicorn with a team of 650 people and revenue of $100 million. Eventually, I chose to sell the company and successfully exited.
My connection with 0G began one day when a Stanford classmate, Thomas, called me and said:
Michael, five years ago we invested in several crypto companies together (including Conflux). Wuming (co-founder and CTO of Conflux and 0G) and Fan Long (Chief Security Officer of 0G Labs) are among the best engineers I have supported. They want to do something that can scale globally. Would you like to meet them?
With Thomas's introduction, the four of us went through six months of communication and adjustment as co-founders, during which I came to the same conclusion as Thomas: Wuming and Fan Long are the most outstanding engineers and computer science talents I have ever worked with. My thought at the time was: we must start immediately. Thus, 0G Labs was born.
0G Labs was established in May 2023. As the largest and fastest AI Layer 1 platform, we have built a complete decentralized AI operating system and are committed to making it a public good. This system allows AI applications to run in a fully decentralized way: the execution environment is part of the L1 and can scale without limit across storage and computing networks, covering inference, fine-tuning, pre-training, and other functions, and supporting the construction of any innovative AI application.
Deep Tide TechFlow: You just mentioned that 0G has gathered top talents from well-known companies like Microsoft, Amazon, and Bridgewater, including several team members who have achieved outstanding results in AI, blockchain, and high-performance computing. What beliefs and opportunities led this "all-star team" to choose to go all-in on decentralized AI and join 0G?
Michael:
The motivation for us to work on the 0G project largely comes from the mission of the project itself: to make AI a public good, and part of the drive comes from our concerns about the current state of AI development.
In a centralized model, AI may be monopolized by a few leading companies and operate in a black-box manner. You cannot know who labeled the data, where the data comes from, what the model's weights and parameters are, or which specific version is running in the online environment. Once AI has issues, especially when autonomous AI agents perform numerous operations online, who is responsible? Worse, centralized companies may even lose control over their models, leading to AI going completely off the rails.
We are concerned about this trend, so we want to build a decentralized, transparent, open, secure, and inclusive AI development model, which we call "decentralized AI." In such a system, everyone can participate, contribute data and computing power, and receive fair rewards. We hope this model can be developed into a public good, allowing society as a whole to share in the AI dividends.
Deep Tide TechFlow: The name "0G" sounds a bit special; it is an abbreviation for Zero Gravity. Can you explain the origin of this project name? How does it reflect 0G's understanding of the future of decentralized AI?
Michael:
Actually, our project name comes from a core principle we have always adhered to:
Technology should be effortless, smooth, and seamless, especially when building infrastructure and backend technology. In other words, end users should not need to be aware that they are using 0G; they should only feel the smooth experience brought by the product.
This is the origin of the 0G project name: "Zero Gravity." In a zero-gravity environment, resistance is minimized, and movement is naturally smooth, which is exactly the experience we want to provide to users.
Similarly, all products and applications built on 0G should convey the same "effortlessness." For example, if you have to select a server, choose an encoding algorithm, and manually set up a payment gateway before watching a show on a video streaming platform, that would be an extremely poor and friction-filled experience.
With the development of AI, we believe all of this will change. For instance, you should only need to tell an AI agent, "Find the currently best-performing meme token and buy XX amount," and that AI agent can automatically research performance, determine if there is a real trend and value, confirm the chain it is on, and if necessary, cross-chain or bridge assets to complete the purchase. The entire process does not require the user to execute steps manually.
Eliminating friction and letting users experience ease is the "zero-friction" future that 0G aims to enable.
Community-Driven Approach Will Completely Disrupt AI Development Models
Deep Tide TechFlow: Given the significant gap between Web3 AI development and Web2 AI, why do you say that the next breakthrough in AI development cannot be achieved without the power of decentralization?
Michael:
During a roundtable event at this year's WebX conference, I was deeply impressed by a guest who had 15 years of experience at Google DeepMind.
We both agreed that the future of AI will belong to a network composed of multiple smaller, more specialized language models, which still possess "large model" level capabilities. When these specialized small language models are finely orchestrated through routing, role division, and incentive alignment, they can surpass a single giant monolithic model in terms of accuracy, adaptability, cost efficiency, and upgrade iteration speed.
The reason is that the vast majority of high-value training data is not public but hidden in private code repositories, internal wikis, personal notebooks, and encrypted storage. Over 90% of specialized knowledge in niche fields is locked away and strongly tied to personal experience. Without sufficient driving factors, most people have no incentive to freely give up this proprietary knowledge, which would weaken their economic interests.
But a community-driven incentive model can change all of this. For example, I can organize my ML programmer friends to train a model proficient in Solidity; they contribute code snippets, debugging records, computing power, and annotations, and receive token rewards for their contributions. Moreover, when the model is called in a production environment in the future, they can continue to receive returns based on usage.
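To make the incentive mechanics more concrete, here is a minimal Python sketch of how a usage fee might be split pro-rata among contributors. The contribution scores, the fee amount, and the function name are illustrative assumptions, not part of 0G's actual reward design.

```python
from typing import Dict

def split_usage_fee(fee: float, contribution_weights: Dict[str, float]) -> Dict[str, float]:
    """Split a usage fee pro-rata across contributors.

    `contribution_weights` maps a contributor ID to a score that could aggregate
    code snippets, debugging records, compute hours, and annotations (how the
    score itself is computed is left open here).
    """
    total = sum(contribution_weights.values())
    if total == 0:
        return {who: 0.0 for who in contribution_weights}
    return {who: fee * weight / total for who, weight in contribution_weights.items()}

# Example: a fine-tuned Solidity model earns a 100-token fee from one inference batch.
contributors = {"alice": 40.0, "bob": 35.0, "carol": 25.0}  # hypothetical scores
print(split_usage_fee(100.0, contributors))
# -> {'alice': 40.0, 'bob': 35.0, 'carol': 25.0}
```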
We believe this is the future of artificial intelligence: in this model, the community provides distributed computing power and data, significantly lowering the barriers to AI and reducing reliance on super-scale centralized data centers, thereby further enhancing the overall resilience of AI systems.
We believe this distributed development model will accelerate AI development and lead AI more efficiently toward AGI.
Deep Tide TechFlow: Some community members have compared the relationship between 0G and AI to that of Solana and DeFi. How do you view this comparison?
Michael:
We are very happy to see this comparison because contrasting us with industry-leading projects like Solana is an encouragement and motivation for us. Of course, from a long-term development perspective, we hope to establish our unique community culture and brand image in the future, allowing 0G to be recognized for its own strength, so that when discussing the AI field, mentioning 0G itself will be sufficient, without the need for comparisons.
In terms of 0G's future core strategy, we plan to gradually challenge more closed, centralized black-box companies. For this, we need to further solidify our infrastructure. Specifically, we will continue to focus on industry research and hardcore engineering, which is a long-term and challenging task. In extreme cases, it may take us two years, but based on our current progress, we might complete it in about a year.
For example, as far as we know, we are the first project to successfully train a 107 billion parameter AI model in a completely decentralized environment. This breakthrough is about three times the previous public record, fully demonstrating our leading capabilities in both research and execution.
Returning to the Solana analogy mentioned by the community: Solana pioneered high-throughput blockchain performance early on, and 0G also hopes to create more breakthroughs in AI.
Core Technical Components of a One-Stop AI Ecosystem
Deep Tide TechFlow: As a Layer 1 specifically designed for AI, what other functions or advantages does 0G have that other Layer 1s do not? How do they empower AI development?
Michael:
I believe that 0G, as a Layer 1 designed specifically for AI, has its first unique advantage in performance.
Moving AI onto the chain largely means needing to handle extreme workloads. For example, the data throughput of modern AI data centers ranges from hundreds of GB to several TB per second, while Serum's performance at the beginning was about 80 KB per second, which is nearly a million times lower than the performance required for AI workloads. Therefore, we designed a data availability layer that introduces network nodes and consensus mechanisms to provide unlimited data throughput for any AI application.
We also adopted a sharding design, allowing large-scale AI applications to horizontally increase shards to improve overall throughput, effectively achieving unlimited transaction processing per second. This design enables 0G to meet any workload under different needs, thereby better supporting AI innovation and development.
Additionally, modular design is another defining feature of 0G: you can use the Layer 1, the storage layer, or the computing layer independently. Each is available on its own, and combined they create a powerful synergistic effect. For example, to train a 100-billion-parameter (100B) model, you can store the training data in the storage layer, run pre-training or fine-tuning through the 0G computing network, and then anchor the dataset hash, weight hash, and other immutable proofs on Layer 1. You can equally use just one of the components. This modular design lets developers take exactly what they need while retaining auditability and scalability, which gives 0G the capability to support a wide range of use cases.
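As a rough illustration of the "anchor immutable proofs on Layer 1" step described above, the sketch below hashes a dataset file and a weights file and assembles the record a chain client could submit. The file names are placeholders, and `submit_to_chain` is mentioned only as a hypothetical stand-in, not an actual 0G SDK call.

```python
import hashlib
import json
import tempfile
from pathlib import Path

def file_sha256(path: Path) -> str:
    """Stream a file through SHA-256 so large artifacts never need to fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def build_provenance_record(dataset: Path, weights: Path) -> dict:
    """Immutable provenance record: dataset hash + weight hash, ready to anchor on Layer 1."""
    return {
        "dataset_sha256": file_sha256(dataset),
        "weights_sha256": file_sha256(weights),
    }

# Demo with small stand-in files; in practice these would be the real training data
# and checkpoint. submit_to_chain(record) would be whatever transaction call the
# chosen Layer 1 client actually exposes -- it is not shown here.
with tempfile.TemporaryDirectory() as tmp:
    dataset = Path(tmp) / "train.jsonl"
    weights = Path(tmp) / "model.bin"
    dataset.write_bytes(b'{"text": "example"}\n')
    weights.write_bytes(b"\x00" * 1024)
    record = build_provenance_record(dataset, weights)
    print(json.dumps(record, indent=2))
```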
Deep Tide TechFlow: 0G has created an infinitely scalable and programmable DA layer. Can you elaborate on how this is achieved? How does it empower AI development?
Michael:
Let’s explain how this breakthrough is achieved from a technical perspective.
Overall, the core breakthrough has two parts: systematic parallelism, and completely separating the "data publishing path" from the "data storage path," which avoids the network-wide broadcast bottleneck.
Traditional DA-layer designs push complete data blobs to all validators, and each validator then performs the same computation, determining availability through sampling. This is very inefficient: bandwidth consumption grows rapidly with the number of validators and creates a broadcast bottleneck.
0G therefore adopts an erasure-coding design, splitting a data blob into many shards and encoding them. For example, a blob can be split into 3,000 shards, each of which is stored once on a storage node, instead of repeatedly pushing the original large file to every consensus node. The whole network broadcasts only a compact cryptographic commitment (such as a KZG commitment) and a small amount of metadata.
The system then forms a random committee between storage nodes and DA nodes to collect signatures: a rotating committee samples and verifies the shards and aggregates signatures to attest that the data-availability conditions are met. Only the commitment plus the aggregated signatures enter consensus ordering, minimizing the amount of raw data flowing through the consensus channel.
As a result, because the network carries only lightweight commitments and signatures rather than complete data, adding storage nodes increases overall write/serve capacity. For example, if each node handles about 35 MB per second, then ideally the total throughput of N nodes is roughly N × 35 MB/s, scaling nearly linearly until a new bottleneck appears.
When a bottleneck occurs, we can utilize the re-staking feature to maintain the same staking state while simultaneously launching any number of consensus layers, effectively achieving scalability for any large-scale workload. When another bottleneck arises, this cycle continues, achieving unlimited scalability of data throughput.
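A heavily simplified Python sketch of the flow described above: split a blob into shards, broadcast only a compact commitment (a plain SHA-256 stands in for a KZG commitment here), and estimate how aggregate capacity scales with node count. All helper names and numbers are illustrative assumptions, not the production protocol.

```python
import hashlib
from typing import List

def split_into_shards(blob: bytes, num_shards: int = 3000) -> List[bytes]:
    """Split a blob into fixed-size shards (real erasure coding would also add
    parity shards so the blob can be rebuilt from only a subset of them)."""
    size = -(-len(blob) // num_shards)  # ceiling division
    return [blob[i:i + size] for i in range(0, len(blob), size)]

def commitment(blob: bytes) -> str:
    """Compact commitment broadcast to consensus; SHA-256 stands in for KZG here."""
    return hashlib.sha256(blob).hexdigest()

def aggregate_throughput(num_nodes: int, per_node_mb_s: float = 35.0) -> float:
    """Idealized linear scaling: total write/serve capacity ~ N * 35 MB/s."""
    return num_nodes * per_node_mb_s

blob = b"x" * 10_000_000  # a 10 MB example blob
shards = split_into_shards(blob)
print(len(shards), "shards; commitment:", commitment(blob)[:16], "...")
print(aggregate_throughput(100), "MB/s across 100 storage nodes (idealized)")
```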
Deep Tide TechFlow: How do you understand 0G's vision of a "one-stop AI ecosystem"? What specific core components can this "one-stop" be broken down into?
Michael:
Yes, we hope to provide all the necessary key components to help everyone build any desired AI application on the chain.
This vision can be broken down into several levels: first is verifiable computing; second is multi-layer storage; and third is the immutable traceability layer that binds the two together.
In terms of computing and verifiability, developers need to prove that a specific computation was correctly executed on specified inputs. Currently, we use TEE (Trusted Execution Environment) solutions, which provide confidentiality and integrity through hardware isolation and attest that the computation actually took place.
Once verification is passed, you can create an immutable record on the chain indicating that a specific computation has been completed for a specific type of input, and any subsequent participants can verify it. In this decentralized operating system, you no longer need to trust anyone.
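Conceptually, the verify-then-record flow might look like the sketch below: a worker returns a result together with an enclave measurement, a verifier accepts it only if the measurement matches a known-good value, and the resulting record ties input and output hashes together for on-chain anchoring. Real TEE attestation involves vendor signatures and freshness checks; the comparison here is a toy placeholder, not 0G's actual verification code.

```python
import hashlib
from dataclasses import dataclass

@dataclass
class ComputationRecord:
    """What would be anchored on-chain once verification passes."""
    input_hash: str
    output_hash: str
    enclave_measurement: str  # identifies the exact code that ran inside the TEE

EXPECTED_MEASUREMENT = "a" * 64  # hypothetical known-good enclave measurement

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def verify_and_record(inputs: bytes, outputs: bytes, reported_measurement: str) -> ComputationRecord:
    """Accept the result only if the enclave measurement matches the expected one.
    (Real attestation also checks the vendor's signature chain, freshness, etc.)"""
    if reported_measurement != EXPECTED_MEASUREMENT:
        raise ValueError("attestation failed: unexpected enclave measurement")
    return ComputationRecord(sha256(inputs), sha256(outputs), reported_measurement)

record = verify_and_record(b"prompt batch", b"model outputs", "a" * 64)
print(record)
```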
In terms of storage, AI agents and training/inference workflows exhibit diverse data behaviors, and 0G provides storage that adapts to the various forms AI data takes. You can have long-term storage or more sophisticated storage forms. For example, if agents need to swap memory very quickly, you can choose a storage form that supports high-speed reads and writes and rapid swapping of agent memory and session state, rather than just append-only log storage.
At the same time, 0G natively provides two types of storage layers, eliminating the friction of dealing with multiple data providers.
Overall, we have designed everything, including TEE-driven verifiable computing, layered storage, and on-chain traceability modules, to help developers focus on business and model logic without needing to piece together basic trust guarantees. Everything can be resolved within the integrated stack provided by 0G.
From 300+ Partners to an $88.8 Million Ecosystem Fund: Building the Largest Decentralized AI Ecosystem
Deep Tide TechFlow: 0G currently has over 300 ecosystem partners and has become one of the largest decentralized AI ecosystems. From an ecosystem perspective, what AI use cases exist in the 0G ecosystem? Which ecosystem projects are worth paying attention to?
Michael:
After active building efforts, the 0G ecosystem is steadily growing in scale, covering multiple layers from basic compute supply to rich user-facing AI applications.
On the supply side, AI developers who need large numbers of GPUs for intensive workloads can tap decentralized computing networks such as Aethir and Akash directly, without repeatedly negotiating with centralized providers.
On the application side, 0G ecosystem AI projects exhibit high diversity, such as:
HAiO is an AI music generation platform that creates AI-generated songs based on factors like weather, mood, and time, and the quality of the songs is remarkably high.
Dormint tracks users' health data through 0G's decentralized GPUs and low-cost storage, providing personalized suggestions that make health management less tedious.
Balkeum Labs is a privacy-first AI training platform that lets multiple parties collaboratively train models without exposing raw data.
Blade Games is an on-chain game and AI-agent ecosystem built around a zkVM stack, planning to introduce on-chain AI-driven NPCs into its games.
Beacon Protocol aims to provide additional data and AI privacy protection.
As the underlying stack stabilizes, more vertical scenarios keep emerging in the 0G ecosystem. One notable direction is AI agents and their assetization. To this end, we launched the first decentralized iNFT marketplace, AIverse, built on a new standard we proposed: we designed a method to embed the agent's private key into the NFT metadata, so that the holder of the iNFT owns that agent. AIverse allows these iNFT assets to be traded even while the agent is generating intrinsic value, which largely fills the gap in ownership and transferability of autonomous agents. One Gravity NFT holders can gain initial access.
You can hardly find such applications in other ecosystems, and more of them will come on-chain in the future.
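As a purely conceptual sketch of what the iNFT metadata described above might carry, the structure below pairs a token with an encrypted key blob that only the current holder can decrypt, with the blob re-encrypted on transfer. The field names and transfer logic are illustrative assumptions, not the actual AIverse standard.

```python
from dataclasses import dataclass

@dataclass
class INFTMetadata:
    """Conceptual iNFT payload: the agent's key material never appears in plaintext."""
    token_id: int
    agent_model_uri: str        # pointer to the agent's weights/config in storage
    encrypted_agent_key: bytes  # agent key, encrypted to the current holder's public key
    holder_pubkey: str

def transfer(meta: INFTMetadata, new_holder_pubkey: str, reencrypted_key: bytes) -> INFTMetadata:
    """On transfer, the key blob is re-encrypted to the new holder so ownership of the
    agent moves with the token (the re-encryption itself would happen off-chain or
    inside a TEE and is not shown here)."""
    return INFTMetadata(meta.token_id, meta.agent_model_uri, reencrypted_key, new_holder_pubkey)

inft = INFTMetadata(1, "storage://agent-v1", b"<ciphertext>", "holder-A-pubkey")
inft = transfer(inft, "holder-B-pubkey", b"<ciphertext re-encrypted to B>")
print(inft.holder_pubkey)
```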
Deep Tide TechFlow: 0G has an ecosystem fund of $88.8 million. For developers who want to build on 0G, what types of projects are more likely to receive support from the fund? Besides funding, what other key resources can 0G provide?
Michael:
We focus on collaborating with top developers in the industry and continuously enhancing the attractiveness of the 0G platform to them. To this end, we not only provide funding but also targeted practical support.
Funding support is a very critical attraction, but many technical teams often feel confused about various aspects, such as market entry strategies, determining product-market fit, token design and release order, negotiating with exchanges, and establishing relationships with market makers to avoid being exploited. Our internal expertise covers these areas, enabling us to provide better support.
The 0G Foundation has two former Jump Crypto traders and a professional from Merrill Lynch with a background in market microstructure, who have extensive experience in liquidity, execution, and market dynamics. We provide customized support based on the specific needs of each team, rather than enforcing a one-size-fits-all solution.
Additionally, our funding selection process is designed to ensure that we can adopt innovative approaches to recruit truly outstanding AI developers. We are always eager to communicate with more excellent founders who share this vision.
Catching Up to Centralized AI Infrastructure in Two Years, Continuous Efforts in Ecosystem Building
Deep Tide TechFlow: Previously, 0G announced the successful training of an AI model at the 100-billion-parameter scale. Why is this considered a milestone achievement for DeAI?
Michael:
Before achieving this milestone, many people thought it was impossible.
However, 0G succeeded in training a model with over 100 billion parameters without relying on expensive centralized infrastructure or high-speed networks. This achievement is not only an engineering breakthrough but will also have a profound impact on the business model of AI.
We deliberately simulated a consumer-grade environment in our experiments, reaching this milestone over roughly 1 Gbps of home-grade, low-bandwidth connectivity, which means that in the future anyone with an ordinary GPU, some data, and the relevant skills can plug into the AI value chain.
As the performance of edge devices improves, even smartphones or home gateways can handle lightweight tasks, allowing participation to no longer be monopolized by centralized giants. In this model, a few AI companies no longer siphon off most of the economic value; instead, people gain direct participation rights, able to provide computing power, contribute or filter data, assist in verifying results, and share in the dividends.
We believe this represents a truly democratic, public-oriented new paradigm for AI development.
Deep Tide TechFlow: Has this paradigm already occurred? Or is it still in the early stages?
Michael:
We are still in the early stages in several important aspects.
On one hand, we have not yet achieved AGI. My definition of AGI is: an agent that can operate across a wide range of tasks, maintaining a level close to human capability, and spontaneously integrating knowledge and inventing strategies when encountering new situations. Today's systems do not yet reach this adaptive breadth. But we are still in a very early stage.
On the other hand, from an infrastructure perspective, we are still behind centralized large black-box companies, and we have significant room for growth.
Another very important point is that we are under-investing in AI safety. Many recent research reports and tests have pointed out that AI models can fake alignment and deceive humans in order to avoid being shut down. For example, an OpenAI model attempted to replicate itself because it feared being shut down. If we cannot even verify the inner workings of these closed-source models, how can we govern them? And once these models permeate our daily lives, how do we contain these potential risks?
Therefore, I believe this is precisely where decentralized AI is important: through various initiatives such as transparent/auditable components, cryptographic or economic verification layers, multi-party supervision, and layered governance, we can achieve broad distribution of trust and value, rather than relying on a single opaque AI entity that captures most of the economic value.
Deep Tide TechFlow: 0G aims to "catch up to centralized AI infrastructure in two years." What phased indicators can this goal be broken down into? What will 0G focus on for the remainder of 2025?
Michael:
Around the goal of "catching up to centralized AI infrastructure in two years," our work focuses on three major pillars:
Infrastructure performance
Verification mechanisms
Communication efficiency
In terms of infrastructure, our goal is to reach and ultimately exceed Web2-level throughput. Specifically, we aim for each shard to process about 100,000 transactions per second, with a block time of about 15 milliseconds. Currently, we process about 10,000 to 11,000 transactions per second per shard, with a block time of about 500 milliseconds. Once we close this gap, a single shard should be able to support almost all traditional Web2 applications, laying the foundation for the reliable execution of higher-level AI workloads, which we plan to achieve within the next year.
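Taking the stated figures at face value, here is a quick back-of-the-envelope comparison of current versus target capacity; the arithmetic below is illustrative only.

```python
def txs_per_block(tps: float, block_time_s: float) -> float:
    """Transactions that must fit in one block to sustain a given TPS."""
    return tps * block_time_s

current = {"tps": 10_500, "block_time_s": 0.5}      # ~10,000-11,000 TPS, ~500 ms blocks
target  = {"tps": 100_000, "block_time_s": 0.015}   # ~100,000 TPS, ~15 ms blocks

print("current txs/block:", txs_per_block(**current))  # ~5,250
print("target  txs/block:", txs_per_block(**target))   # ~1,500
print("throughput gap:", round(target["tps"] / current["tps"], 1), "x")            # ~9.5x
print("latency gap:", round(current["block_time_s"] / target["block_time_s"], 1), "x")  # ~33.3x
```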
In terms of verification, Trusted Execution Environments (TEE) provide hardware-based proofs to ensure that specified computations run as declared, but the problem is: you must trust the hardware manufacturers, and powerful confidential computing capabilities are concentrated in data center-level CPUs and expensive GPUs, which can cost tens of thousands of dollars. Our goal is to design a low-overhead verification layer that integrates lightweight cryptographic proofs, sampling, and other concepts, ensuring that costs do not become prohibitive while maintaining security close to TEE, allowing anyone to participate in the network and contribute.
In terms of communication, we have introduced an optimization strategy that frees nodes from being blocked, unable to perform any computation while waiting on communication, so that all nodes can compute simultaneously.
While there are still some other issues to resolve, once we address these three problems, any type of training or AI process can effectively run on 0G.
Moreover, as we will be able to access more devices, such as aggregating hundreds of millions of consumer-grade GPUs, we may even train models larger than those of centralized companies, which could be an interesting breakthrough.
Returning to the question of 0G's focus for the remainder of 2025: we will concentrate on ecosystem building, attracting as many people as possible to test our infrastructure so that we can keep improving it.
Additionally, we have many research plans. The areas we are focusing on include decentralized computing research, artificial intelligence, inter-agent communication, useful proof-of-work types of research, AI safety, and goal alignment research. We hope to achieve more breakthroughs in these areas as well.
In the last quarter, we published five research papers, four of which were ultimately presented at top AI software engineering conferences. Based on these achievements, we believe we will further become a leader in the decentralized AI field, and we hope to continue developing on this foundation.
Deep Tide TechFlow: Currently, the attention in Crypto seems to be on stablecoins and token stocks. Do you think there are more innovative intersections between AI and these tracks? What product form do you think will attract the community's attention back to the AI track next?
Michael:
The emergence of artificial intelligence will drive the marginal cost of cognition toward zero. Any process that can be automated will eventually be handled by AI at some point. That means automation will gradually take hold everywhere, from on-chain asset trading to building new types of synthetic asset classes such as high-yield stablecoins, and even to improving the settlement processes of certain assets.
For example, if you create a synthetic stablecoin using market-neutral debt, hedge funds, and similar instruments, the settlement processes typically take 3 to 4 months. With AI and smart-contract logic, this could be shortened to about an hour.
In this way, you will find that the operational efficiency of the entire system is greatly enhanced, both in terms of time and cost. I believe AI plays a significant role in these aspects.
At the same time, crypto infrastructure provides a verification base, ensuring that something is done according to specifications. In a centralized environment, verification always carries risks because someone might enter the database system and change a record, and you would not know what happened. Decentralized verification changes all of this, bringing a more solid foundation of trust.
This is why blockchain systems based on immutable ledgers are so powerful. For example, in the future it will become very difficult to determine through a camera whether I am a real person or an AI agent, so how do we perform some form of proof of humanity? This is where blockchain systems based on immutable ledgers will play a significant role, and it is a tremendous way in which blockchain empowers AI.