
MEJ毛毛姐 | Jul 08, 2025 03:31
After carefully reading the article by @0G_Foundation about the advantages of the multi-consensus design in @0G_labs, as an early builder I have gained some new insights that I'd like to share:
When it comes to blockchain design, most people start from the preconceived notion that there must be one "universal" consensus mechanism, as if a single mechanism could cover every requirement. But in the emerging world of decentralized AI (deAI), the reality is closer to this: no single consensus can perfectly support the entire AI stack.
The idea of @0G_labs is very interesting: rather than forcing everything onto one consensus, use a multi-consensus architecture plus shared staking.
Its base layer is not actually one simple chain. Instead, each layer (decentralized storage, on-chain inference execution, etc.) gets the consensus or verification mechanism best suited to it, and then all of their security is tied together through unified shared staking (a minimal sketch follows the list below).
There are several benefits to this:
Different networks can share the same validator set and stake, so they all inherit a consistent level of security.
Storage, data availability, and compute each do their own thing, using the consensus that suits them best, instead of all compromising on one "universal solution".
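To make the shared-staking idea concrete, here is a minimal sketch in Python. Every name in it (SharedStakingRegistry, join_network, the slashing fraction) is a hypothetical illustration of the concept, not 0G's actual contracts or parameters:

```python
# Minimal sketch of shared staking: one stake pool secures many sub-networks.
# All names and parameters here are hypothetical, not 0G's real contracts.

class SharedStakingRegistry:
    def __init__(self):
        self.stakes = {}      # validator -> staked amount
        self.networks = {}    # network name -> set of participating validators

    def stake(self, validator: str, amount: int) -> None:
        """A validator locks stake once; it backs every network they join."""
        self.stakes[validator] = self.stakes.get(validator, 0) + amount

    def join_network(self, validator: str, network: str) -> None:
        """The same stake is reused across storage, DA, inference, etc."""
        if validator not in self.stakes:
            raise ValueError("must stake before joining a network")
        self.networks.setdefault(network, set()).add(validator)

    def slash(self, validator: str, fraction: float) -> int:
        """Misbehavior proven on ANY network cuts the shared stake,
        so security levels stay consistent across all of them."""
        penalty = int(self.stakes[validator] * fraction)
        self.stakes[validator] -= penalty
        return penalty


registry = SharedStakingRegistry()
registry.stake("val-1", 1_000)
registry.join_network("val-1", "storage")
registry.join_network("val-1", "inference")
# A fault proven on the storage network slashes stake that also
# secures the inference network: one economic security budget.
registry.slash("val-1", 0.10)
```

The point of the design: one economic security budget backs every sub-network, so a fault proven anywhere cuts stake that secures everything.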
🌟 What practical advantages does having this many consensus mechanisms bring?
1️⃣ Finer-grained task segmentation: the most suitable mechanism verifies each task
Each stage of the AI pipeline has different performance requirements. A single consensus is always a compromise, while multiple consensus mechanisms can allocate resources precisely (see the routing sketch after this list).
2️⃣ Near-unlimited scaling
A traditional single chain often hits a throughput ceiling. 0G can spread the load across multiple networks for parallel processing, achieving genuine horizontal scalability.
3️⃣ Fault isolation: one problem doesn't take down the whole system
A multi-consensus design can partially self-heal: even if one network goes down, the rest of the system keeps running.
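A toy illustration of points 1️⃣ and 2️⃣ above: route each kind of AI workload to the sub-network whose verification mechanism fits it, and fan the work out in parallel. The network names and the router itself are assumptions for illustration, not 0G's actual API:

```python
# Hypothetical sketch: route AI workloads to the sub-network whose
# verification mechanism fits them, and process them in parallel.
from concurrent.futures import ThreadPoolExecutor

# Which (hypothetical) sub-network handles which workload type.
ROUTES = {
    "store_blob": "storage-net",       # verified by storage proofs
    "publish_da": "da-net",            # verified by quorum signatures
    "run_inference": "inference-net",  # verified by inference proofs
}

def submit(task_type: str, payload: bytes) -> str:
    network = ROUTES[task_type]
    # In a real system this would dispatch the task to that network's nodes.
    return f"{task_type} -> {network} ({len(payload)} bytes)"

tasks = [("store_blob", b"model-weights"),
         ("publish_da", b"rollup-batch"),
         ("run_inference", b"prompt")]

# Independent networks mean the load fans out horizontally:
with ThreadPoolExecutor() as pool:
    for result in pool.map(lambda t: submit(*t), tasks):
        print(result)
```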
Even better, 0G is not just building consensus; it is also verifying that every participant actually does the work.
For example:
On the storage side, PoRA (Proof of Random Access) requires nodes to quickly read random 256KB data chunks to prove they actually hold the data, which also discourages miners from outsourcing storage (a toy challenge-response version is sketched after this list).
On the compute side, PoI (Proof of Inference) ensures that AI inference and training tasks are genuinely executed by on-chain nodes.
Optimistic OpML relies on fraud proofs during a challenge period.
Zero-knowledge zkML allows verification without leaking the underlying data.
TEEML guarantees execution through a hardware trusted execution environment (TEE).
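To show the flavor of PoRA, here is a minimal challenge-response sketch. The 256KB chunk size comes from the post; the response format, the deadline, and all function names are assumptions, not 0G's actual protocol:

```python
# Hypothetical PoRA-style sketch: a verifier challenges a storage node
# to read a random 256KB chunk and answer quickly. A node that doesn't
# hold the data locally can't fetch it in time.
import hashlib
import os
import time

CHUNK_SIZE = 256 * 1024   # 256KB chunks, as described in the post
TIME_LIMIT = 0.05         # assumed response deadline (seconds)

stored_file = b"\x42" * (CHUNK_SIZE * 8)  # data the node claims to store

def respond(data: bytes, chunk_index: int, nonce: bytes) -> bytes:
    """Node proves it can read the challenged chunk right now."""
    start = chunk_index * CHUNK_SIZE
    chunk = data[start:start + CHUNK_SIZE]
    return hashlib.sha256(nonce + chunk).digest()

def verify(data: bytes, chunk_index: int, nonce: bytes,
           proof: bytes, elapsed: float) -> bool:
    """Verifier recomputes the expected answer and checks the deadline.
    (A real verifier would check against a commitment, not the raw data.)"""
    expected = respond(data, chunk_index, nonce)
    return proof == expected and elapsed <= TIME_LIMIT

# Challenge: a random chunk index plus a fresh nonce (prevents precomputation).
nonce = os.urandom(32)
chunk_index = int.from_bytes(os.urandom(4), "big") % 8

t0 = time.monotonic()
proof = respond(stored_file, chunk_index, nonce)
elapsed = time.monotonic() - t0

print("proof accepted:", verify(stored_file, chunk_index, nonce, proof, elapsed))
```

The tight deadline is what discourages outsourcing: fetching the challenged chunk from a remote provider would blow the time budget.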
🤖 Ultimately, this is still a story of 'specialization over generalization'
0G simply abandoned the search for a so-called single silver bullet and instead chose:
1. Shared staking across multi-layer shards,
2. High-throughput PoS BFT consensus,
3. PoRA to secure storage,
4. Quorum signatures for data availability (sketched below).
From consensus and verification to incentives, everything is tailored specifically to the diverse needs of AI.
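As a flavor of item 4, the quorum-signature idea for data availability can be sketched like this: a committee attests that a data blob is available, and the blob counts as available once a threshold of attestations is collected. The 2/3 threshold, the committee model, and all names below are assumptions for illustration, not 0G's actual parameters:

```python
# Hypothetical quorum-signature sketch for data availability:
# a blob counts as "available" once >= 2/3 of the committee attests to it.
import hashlib

COMMITTEE = ["node-a", "node-b", "node-c", "node-d", "node-e", "node-f"]
THRESHOLD = (2 * len(COMMITTEE) + 2) // 3   # assumed 2/3 quorum rule

def attest(node: str, blob: bytes) -> tuple[str, str]:
    """Stand-in for a real signature: the node attests to the blob's hash."""
    digest = hashlib.sha256(blob).hexdigest()
    return (node, digest)

def is_available(blob: bytes, attestations: list[tuple[str, str]]) -> bool:
    digest = hashlib.sha256(blob).hexdigest()
    signers = {node for node, d in attestations
               if d == digest and node in COMMITTEE}
    return len(signers) >= THRESHOLD

blob = b"rollup batch #1"
sigs = [attest(n, blob) for n in COMMITTEE[:4]]  # 4 of 6 nodes have the data
print("available:", is_available(blob, sigs))    # 4 >= ceil(2/3 * 6) = 4
```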
This can provide future decentralized AI developers and users with a transparent, scalable, and more trustworthy infrastructure.
I prefer to see this as an executable design rather than a grand whitepaper vision. 0G has opened an interesting path for decentralized AI across network security, economic incentives, and scalability. Going forward, it will be worth watching how its multi-consensus and verification layers actually run.
@michaelh_0g @0g_CN @0G_Foundation @Jtsong2 @Ada_0g @vanessaaal7 @KaitoAI