Author: YBB Capital Researcher Zeke
Preface
The two mainstream blockchain architecture designs in Web3 have begun to feel aesthetically tiring. Whether it is the proliferating modular chains or the new L1s that tout performance without demonstrating any real performance advantage, their ecosystems are largely replicas of, or slight improvements on, Ethereum's. The highly homogeneous experience has long since cost users any sense of novelty. Arweave's newly proposed AO protocol, however, is refreshing: it achieves ultra-high-performance computing on a storage chain, even approaching a Web2-grade experience, and looks nothing like the scaling methods and architecture designs we are used to. So what exactly is AO, and where does the logic behind its performance come from?
Understanding AO
The name AO comes from "Actor Oriented," a programming paradigm within the Actor Model of concurrent computing. Its overall design extends SmartWeave and likewise follows the Actor Model's core idea of message passing. Put simply, AO can be understood as a "super parallel computer" running on the Arweave network through a modular architecture. In implementation terms, AO is not the modular execution layer we commonly see today, but a protocol that standardizes message passing and data-processing communication. The protocol's core goal is to let the different "roles" in the network collaborate via message passing, creating a computing layer whose performance can be stacked without limit. Ultimately, this gives Arweave, the "giant hard drive," the speed of a centralized cloud, scalable computing power, and extensibility, all within a decentralized, trust-minimized environment.

AO Architecture
The concept of AO bears some resemblance to the "exotic scheduling" of Core Time, the segmentation and recombination of block-space resources that Gavin Wood proposed at last year's Polkadot Decoded conference: both aim to build a "high-performance world computer" by scheduling and coordinating computing resources. The two nonetheless differ in nature. Exotic scheduling deconstructs and reorganizes block-space resources on the relay chain without significantly changing Polkadot's architecture; although it lifts computing performance past the single-parachain limit of the slot model, the ceiling is still set by Polkadot's maximum number of idle cores. AO, by contrast, can in theory offer near-unlimited computing power through horizontal scaling of nodes (in practice this depends on the level of network incentives), along with greater degrees of freedom. Architecturally, AO standardizes how data is processed and how messages are expressed, and completes the ordering, scheduling, and computation of information through three types of network units. According to official materials, the roles of the different units can be summarized as follows:
- Process: A process can be viewed as a collection of instructions executed in AO. When initialized, a process can define the computing environment it requires, including the virtual machine, scheduler, memory requirements, and necessary extensions. Each process maintains a "holographic" state (each process's data can be stored independently in Arweave's message log; the holographic state is explained in detail in the "Verifiable issues of AO" section below). This holographic state means processes can work independently: execution is dynamic and can be performed by any suitable computing unit. Besides receiving messages from user wallets, a process can also receive messages forwarded from other processes via messenger units.

- Message: Every interaction between a user (or another process) and a process is represented by a message, and each message must conform to Arweave's native ANS-104 data-item format so that it can be stored natively by Arweave. Loosely speaking, a message is somewhat like a transaction (TX) in a traditional blockchain, though the two are not exactly the same.

- Messenger Unit (MU): The MU relays messages through a process called "cranking" to ensure seamless interaction within the system. Once a message is sent, the MU routes it to the appropriate destination in the network (an SU), coordinates the interaction, and recursively processes any resulting outgoing messages until every message has been handled. Beyond relaying, MUs also manage process subscriptions and handle timed cron interactions.
- Scheduling Unit (SU): Upon receiving a message, the SU performs a series of key operations to maintain the continuity and integrity of processes. It assigns the message a unique, incrementing nonce that fixes its order relative to other messages in the same process, and formalizes the assignment with a cryptographic signature to guarantee authenticity and sequence integrity. The SU then signs the assignment and uploads the message to the Arweave data layer, ensuring the message's availability and immutability and preventing tampering or loss.
- Computing Unit (CU): CUs compete with one another in a peer-to-peer compute market to resolve process state for users and SUs. Once a state computation is complete, the CU returns a signed attestation of the result to the caller. CUs can also generate and publish signed state attestations that other nodes can load, for a proportional fee.
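To make the message format above concrete, here is a minimal Python sketch of an ANS-104-style data item. The field names follow the spec only at a high level; real data items are binary-encoded and cryptographically signed, which is omitted here, and `make_ao_message` with its tag values is an illustrative assumption, not an official API.

```python
from dataclasses import dataclass, field

@dataclass
class DataItem:
    """Simplified, hypothetical view of an ANS-104 data item."""
    owner: str            # public key of the sender
    target: str = ""      # recipient process (optional)
    anchor: str = ""      # anti-replay anchor (optional)
    tags: list = field(default_factory=list)  # name/value metadata pairs
    data: bytes = b""     # message payload

def make_ao_message(owner: str, process_id: str, action: str, payload: bytes) -> DataItem:
    """Build an AO-style message: the target is the receiving process,
    and tags carry routing metadata such as the action name."""
    return DataItem(
        owner=owner,
        target=process_id,
        tags=[{"name": "Data-Protocol", "value": "ao"},
              {"name": "Action", "value": action}],
        data=payload,
    )

msg = make_ao_message("alice-pubkey", "process-123", "Transfer", b'{"qty": 10}')
```

The point of the sketch is that a message is just permanently storable tagged data, which is what lets Arweave hold the full interaction log natively.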

AOS Operating System
AOS can be viewed as the operating system, or terminal tool, of the AO protocol, used for creating, running, and managing processes. It provides an environment in which developers can develop, deploy, and run applications: on AOS, developers use the AO protocol to build and deploy applications and to interact with the AO network.
Operating Logic
The Actor Model advocates a philosophical viewpoint called "everything is an actor." All components and entities within this model can be viewed as "actors," each with its own state, behavior, and mailbox. They communicate through asynchronous messaging to enable the entire system to organize and operate in a distributed and concurrent manner. The operating logic of the AO network is also based on this concept, where components and even users can be abstracted as "actors" and communicate with each other through the message passing layer, linking processes to establish a distributed working system that is parallel and non-shared in state.
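The "everything is an actor" idea can be sketched in a few lines: each actor owns private state and a mailbox, and actors interact only by sending messages, never through shared memory. This is a toy illustration of the paradigm, not AO code.

```python
from queue import Queue

class Actor:
    """Minimal actor: private state, a mailbox, and message passing."""
    def __init__(self, name: str):
        self.name = name
        self.state = {"received": 0}   # no other actor can touch this
        self.mailbox = Queue()

    def send(self, target: "Actor", payload):
        # Asynchronous delivery: just enqueue into the target's mailbox.
        target.mailbox.put((self.name, payload))

    def process_one(self):
        # Handle exactly one message; state changes only here.
        sender, payload = self.mailbox.get()
        self.state["received"] += 1
        return sender, payload

alice, bob = Actor("alice"), Actor("bob")
alice.send(bob, "hello")
alice.send(bob, "world")
bob.process_one()
bob.process_one()
```

Because no state is shared, any number of such actors can run concurrently on different machines, which is exactly the property AO exploits for horizontal scaling.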

The message passing flow can be briefly described in the following steps:
1. Message initiation: a user or process creates a message to send a request to another process. The MU (messenger unit) receives the message and sends it onward via a POST request.
2. Message processing and forwarding: the MU processes the POST request and forwards the message to the SU (scheduling unit). The SU interacts with the Arweave storage and data layer to persist the message.
3. Retrieving results by message ID: the CU (computing unit) receives a GET request, retrieves results based on the message ID, and evaluates the message against the process state. It can return results for a single message identifier.
4. Retrieving information: the SU receives a GET request and retrieves message information for a given time range and process ID.
5. Pushing outbox messages: the final step is to push all outbox messages. This involves checking the messages and spawned processes in the result object; based on that check, steps 2, 3, and 4 are repeated for each relevant message or spawn.
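The steps above can be sketched end to end with in-memory stand-ins for the three units and for Arweave. Every name and rule here is an illustrative assumption, not a real AO API; the point is the division of labor, with the MU routing, the SU ordering and persisting, and the CU computing state from the stored log.

```python
arweave_log = []   # stand-in for permanent Arweave storage

def su_schedule(process_id: str, message: dict) -> dict:
    """SU: assign an incrementing per-process nonce and persist (step 2)."""
    nonce = sum(1 for e in arweave_log if e["process"] == process_id)
    entry = {"process": process_id, "nonce": nonce, "message": message}
    arweave_log.append(entry)
    return entry

def cu_evaluate(process_id: str) -> dict:
    """CU: fold the stored, ordered log into a process state (step 3)."""
    state = {"balance": 0, "outbox": []}
    for e in arweave_log:
        if e["process"] == process_id:
            state["balance"] += e["message"].get("amount", 0)
    return state

def mu_crank(process_id: str, message: dict) -> dict:
    """MU: route the message, then push outbox messages (steps 1 and 5)."""
    su_schedule(process_id, message)
    result = cu_evaluate(process_id)
    for out in result["outbox"]:            # recursively crank outgoing msgs
        mu_crank(out["target"], out["message"])
    return result

r1 = mu_crank("proc-A", {"amount": 5})
r2 = mu_crank("proc-A", {"amount": 7})
```

Note that the CU never holds authoritative state of its own: everything it returns is derived from the log the SU wrote, which is what makes the computation reproducible.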
What does AO change?
Differences from common networks:
Parallel processing capability: Unlike Ethereum-style networks, where the base layer and each Rollup effectively run as a single process, AO supports any number of processes running in parallel while keeping the computation fully verifiable. Moreover, those networks operate on a globally synchronized state, whereas each AO process maintains its own independent state. This independence lets AO handle far more interactions and scale its computation, making it especially suitable for applications that demand high performance and reliability.
Verifiable reproducibility: While some decentralized networks, such as Akash and the peer-to-peer system Urbit, do provide large-scale computing, unlike AO they do not offer verifiable reproducibility of interactions, and they rely on non-permanent storage for their interaction logs.
Differences of AO's node network from traditional computing environments:
Compatibility: AO supports various forms of processes; whether WASM-based or EVM-based, they can be bridged onto AO through appropriate technical means.
Content collaboration projects: AO also supports content collaboration, allowing atomic NFTs to be published on AO and NFTs to be built on AO by combining data with the Universal Data License (UDL).
Data composability: NFTs on AR and AO can achieve data composability, allowing an article or content to be shared and displayed on multiple platforms while maintaining the consistency and original properties of the data source. When content is updated, the AO network can broadcast these updated states to all relevant platforms, ensuring the synchronization and propagation of the latest content.
Value feedback and ownership: Content creators can sell their works as NFTs and transmit ownership information through the AO network, enabling value feedback for content.
Support for projects:
Built on Arweave: AO utilizes the features of Arweave, eliminating vulnerabilities associated with centralized providers, such as single points of failure, data leaks, and censorship. Computing on AO is transparent and can be verified through decentralized trust-minimized features and reproducible message logs stored on Arweave.
Decentralized infrastructure: AO's decentralized infrastructure helps overcome scalability limitations imposed by physical infrastructure. Anyone can easily create an AO process from their terminal without the need for specialized knowledge, tools, or infrastructure, ensuring that even individuals and small entities can have global impact and participation.
Verifiable issues of AO
After understanding AO's framework and logic, a common question arises: AO seems to lack the global consistency of traditional decentralized protocols or chains, and appears to achieve verifiability and decentralization merely by uploading some data to Arweave. This is precisely the ingenuity of AO's design. AO itself is implemented off-chain; it does not solve the verifiability problem or alter consensus. The AR team's approach is to separate AO's functions from Arweave and connect them modularly: AO handles only communication and computation, while Arweave provides only storage and verification. The relationship between the two is more like a mapping: AO only needs to ensure that interaction logs are stored on Arweave, so that its state can be projected onto Arweave, creating a holographic map. This holographic state projection guarantees the consistency, reliability, and determinism of computed outputs. Moreover, the message logs on Arweave can trigger AO processes to perform specific operations (processes can wake up and act dynamically according to preset conditions and schedules).

According to what Hill and Outprog have shared, a simpler way to grasp the verification logic is to imagine AO as an inscription computing framework built on a super parallel indexer. A Bitcoin inscription indexer verifies inscriptions by extracting JSON from them and recording balance information in an off-chain database according to a set of indexing rules. Although the indexer verifies off-chain, users can check it by switching between multiple indexers or running their own, so they need not worry about indexer misbehavior. As mentioned earlier, message ordering and the holographic state of processes are uploaded to Arweave, so under the SCP paradigm (the Storage-based Consensus Paradigm, which can be roughly understood as putting the indexing rules themselves on-chain; it is worth noting that SCP predates inscription indexers by a wide margin), anyone can restore AO, or any process on AO, from the holographic data on Arweave. Users do not need to run a full node to verify trusted state; just as with swapping indexers, they only need to query one or more CU nodes via an SU. With Arweave's high storage capacity and low cost, AO developers can thus build a supercomputing layer that goes far beyond what Bitcoin inscriptions can do.
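Under this paradigm, verification reduces to deterministic replay: given the ordered message log on Arweave and a deterministic state-transition function, anyone can recompute a process's state and compare it with what a CU claims, just as one re-runs an inscription indexer. The transition rule and all names below are hypothetical, chosen only to make the replay idea concrete.

```python
def apply(state: dict, message: dict) -> dict:
    """Deterministic state transition (illustrative rule set)."""
    new = dict(state)
    new[message["key"]] = new.get(message["key"], 0) + message["delta"]
    return new

def replay(log: list) -> dict:
    """Rebuild the full process state from the holographic log.
    The order is fixed by the SU's nonces, so every replayer agrees."""
    state = {}
    for message in log:
        state = apply(state, message)
    return state

# The log as stored on Arweave, and a state some CU claims to have computed:
log = [{"key": "x", "delta": 3}, {"key": "y", "delta": 1}, {"key": "x", "delta": -1}]
claimed_by_cu = {"x": 2, "y": 1}
verified = replay(log) == claimed_by_cu
```

Because the log is permanent and the transition is deterministic, a dishonest CU can always be caught by replaying, which is why AO does not need its own global consensus over execution.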
AO and ICP
Let's summarize AO's key features: a giant native hard drive, unlimited parallelism, unlimited computation, a modular overall architecture, and holographic process state. All of this sounds very promising, but anyone familiar with the blockchain world's many public-chain projects may notice that AO closely resembles a once-hyped "king-level" project: the formerly celebrated "Internet Computer," ICP.
ICP was once hailed as the last king-level project of the blockchain world, highly praised by top institutions, and during the frenzied bull market of 2021 it reached a market cap of $200 billion. But as the wave receded, ICP's token value collapsed; by the 2023 bear market it had fallen to roughly 1/260th of its all-time high. Yet if we set token price aside and re-examine ICP even today, its technical characteristics still have many unique aspects; many of AO's impressive advantages were also ICP's back then. So will AO fail the way ICP did? Let's first understand why the two are so similar. Both ICP and AO are designed around the Actor Model and focus on running localized blockchains, which is why they share so many traits. ICP's subnet blockchains are formed by independently owned and operated high-performance hardware (node machines) running the Internet Computer Protocol. The protocol is implemented by many software components which, as a bundle, constitute a replica, so named because it replicates state and computation across all nodes in a subnet blockchain.
The replication architecture of ICP can be divided into four layers from top to bottom:
- Peer-to-peer (P2P) network layer: collects and advertises messages from users, from other nodes in the same subnet blockchain, and from other subnet blockchains. Messages received by the peer layer are replicated to all nodes in the subnet to ensure security, reliability, and resilience.
- Consensus layer: selects and orders messages received from users and from different subnets to create blocks, which are notarized and finalized through Byzantine fault-tolerant consensus, forming an evolving blockchain. The finalized blocks are passed to the message routing layer.
- Message routing layer: routes user- and system-generated messages between subnets, manages the input and output queues of dapps, and schedules message execution.
- Execution environment layer: performs the deterministic computation involved in executing smart contracts by processing the messages it receives from the message routing layer.

Subnet Blockchains
A subnet is a collection of interacting replicas, each running a separate instance of the consensus mechanism to create its own blockchain, on which a set of "canisters" can run. Each subnet can communicate with other subnets and is controlled by the root subnet, which delegates its authority to each subnet using chain-key cryptography. ICP uses subnets to allow unlimited scalability. The problem with traditional blockchains (and with any individual subnet) is that they are limited by the computing power of a single node machine, since every node must run everything that happens on the chain in order to participate in consensus. Running many independent subnets in parallel lets ICP break through this single-machine barrier.
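The single-machine barrier argument can be illustrated with a toy calculation: on a monolithic chain every node executes every message, while routing each process (here standing in for a canister) to one of several subnets divides the per-node load. The hash-based routing function is an illustrative assumption, not ICP's actual assignment mechanism.

```python
import zlib

def route_to_subnet(process_id: str, num_subnets: int) -> int:
    """Deterministically map a process to a subnet (hash-based sharding sketch)."""
    return zlib.crc32(process_id.encode()) % num_subnets

def per_node_load(messages, num_subnets: int) -> int:
    """Worst-case number of messages any single node must execute
    when processes are partitioned across subnets."""
    loads = [0] * num_subnets
    for proc, _ in messages:
        loads[route_to_subnet(proc, num_subnets)] += 1
    return max(loads)

messages = [(f"proc-{i}", "msg") for i in range(1000)]
mono = len(messages)                  # monolithic chain: every node runs all 1000
sharded = per_node_load(messages, 4)  # with 4 subnets, each node runs only its share
```

Adding subnets (or, in AO's case, processes and compute nodes) shrinks the per-node share further, which is the core of the horizontal-scaling claim in both designs.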
Reasons for Failure
As mentioned above, the purpose of the ICP architecture was to achieve a decentralized cloud server. A few years ago, this concept was as impressive as AO is today, so why did it fail? Simply put, it aimed too high and delivered too little: it never found a good balance between Web3 and its own vision, ending up neither as decentralized as Web3 demands nor as usable as centralized cloud services. In summary, there were three main issues. First, ICP's program system, the Canister, is similar to AO's AOS and processes, but not the same: ICP's programs are encapsulated in Canisters and are not visible from outside, requiring specific interfaces to access their data. This is unfriendly to DeFi protocol contract calls under asynchronous communication, which is why ICP failed to capture the corresponding financial value during DeFi Summer.

The second issue is the extremely high hardware requirements, which kept the project from being decentralized. The image below shows the minimum hardware configuration ICP specified at the time, which even now looks extravagant, far exceeding Solana's requirements and even demanding more storage than storage-focused chains.

The third issue is the lack of an ecosystem. Even now, ICP is a high-performance public chain, but if there are no DeFi applications, what about others? Unfortunately, since its inception ICP has not produced a killer application, and its ecosystem has captured neither Web2 nor Web3 users. After all, with decentralization so lacking, why not simply use rich, mature centralized applications instead? Still, it cannot be denied that ICP's technology remains top-notch; its reverse-gas model, high compatibility, and unlimited scalability are exactly what attracting the next billion users will require. In the current AI wave, if ICP can exploit its architectural advantages, it may yet have a chance at a turnaround.
So, returning to the earlier question: will AO fail like ICP? I personally believe it will not repeat the same mistakes. The latter two causes of ICP's failure do not apply to AO: Arweave already has a strong ecosystem, holographic state projection resolves the centralization problem, and AO is more flexible in compatibility. The bigger challenges lie in the design of its economic model, its support for DeFi, and an age-old question: outside finance and storage, how should Web3 present itself?
Web3 Should Not Be Limited to Narratives
The most frequent word in the world of Web3 is surely "narrative." We have even grown used to valuing most tokens through a narrative lens. That habit stems from the grand visions of most Web3 projects, whose actual products are often awkward to use. By contrast, Arweave already has many fully implemented applications benchmarked against Web2-grade experience, such as Mirror and ArDrive; anyone who has used them will find it hard to feel any difference from traditional applications. Still, Arweave's value capture as a storage chain remains quite limited, and computation may be the way forward, especially now that AI is the dominant external trend and the current combination of Web3 and AI still faces many natural barriers, as we discussed in previous articles. Now, Arweave's AO, with a modular architecture not based on Ethereum, provides new infrastructure for Web3 x AI. From the Library of Alexandria to super parallel computers, Arweave is following a paradigm of its own.
References
AO Quick Start: Introduction to Super Parallel Computers: https://medium.com/@permadao/ao-quick-start-introduction-to-super-parallel-computers-088ebe90e12f
X Space Event Record | Is AO the Ethereum Killer, and How Will It Drive a New Narrative for Blockchain?: https://medium.com/@permadao/x-space-event-record-is-ao-the-ethereum-killer-and-how-will-it-drive-a-new-narrative-for-blockchain-bea5a22d462c
ICP Whitepaper: https://internetcomputer.org/docs/current/concepts/subnet-types
AO Cookbook: https://cookbook_ao.arweave.dev/concepts/tour.html
AO — The Super Parallel Computer You Can't Imagine: https://medium.com/@permadao/ao-你无法想象的超并行计算机-1949f5ef038f
Multi-angle Analysis of the Reasons for the Decline of ICP: Independent Technology and a Sparse Ecosystem: https://www.chaincatcher.com/article/2098499
