Exclusive analysis of projects at the intersection of cryptocurrency and artificial intelligence.

Original Title: "Understanding the Intersection of Crypto and AI"
Original Author: Lucas Tcheyan
Original Translation: 律动小工, BlockBeats

Table of Contents

Introduction

Key Points
Terminology

Panorama of AI + Cryptocurrency

Decentralized Computing

Overview
Vertical Fields of Decentralized Computing
General Computing
Secondary Market
Decentralized Machine Learning Training
Decentralized General Artificial Intelligence
Building a Decentralized Computing Stack for AI Models
Other Decentralized Products

Outlook

Smart Contracts and Zero-Knowledge Machine Learning (zkML)

Zero-Knowledge Machine Learning (zkML)
Infrastructure and Tools
Coprocessors
Applications
Outlook

AI Agents

Agent Providers
Bitcoin and AI Agents
Outlook

Conclusion

Introduction

The emergence of blockchain is arguably one of the most important advances in the history of computer science. At the same time, the development of artificial intelligence has already had, and will continue to have, a profound impact on our world. If blockchain technology provides a new paradigm for transaction settlement, data storage, and system design, artificial intelligence is revolutionizing computation, analysis, and content production. The innovations in these two industries are unlocking new use cases that may accelerate the adoption of both in the coming years. This report explores the ongoing integration of cryptocurrency and artificial intelligence, focusing on new use cases that attempt to bridge the gap between the two and harness the strengths of both. Specifically, it examines decentralized computing protocols, zero-knowledge machine learning (zkML) infrastructure, and AI agent projects.

Cryptocurrency provides a permissionless, trustless, and composable settlement layer for artificial intelligence. This unlocks use cases such as making hardware more accessible through decentralized computing systems, building AI agents capable of executing complex tasks that require value exchange, and developing identity and provenance solutions to combat Sybil attacks and deepfakes. Artificial intelligence, in turn, brings to crypto many of the benefits already seen in Web 2.0, including improved user and developer experiences through large language models (e.g., specially trained versions of ChatGPT and Copilot) and significantly greater potential for smart contract functionality and automation. Blockchains provide the transparent, data-rich environment that artificial intelligence needs. However, the limited computational capacity of blockchains remains a major obstacle to integrating AI models directly on-chain.

Experimentation at the intersection of cryptocurrency and artificial intelligence, and its eventual adoption, is driving many of the most promising use cases for crypto as a permissionless, trustless coordination layer that facilitates value transfer. Given the enormous potential, participants in this field need to understand the fundamental ways in which the two technologies intersect.

Key Points:

In the near term (6 months to 1 year), the integration of cryptocurrency and artificial intelligence will be dominated by AI applications that improve developer efficiency, smart contract auditability and security, and overall usability. These integrations are not specific to crypto but enhance the experience of on-chain developers and users.

With high-performance GPUs in severely short supply, decentralized computing products are rolling out GPU offerings customized for AI, giving strong support to their adoption.

User experience and regulation remain obstacles for decentralized computing clients. However, recent developments by OpenAI and regulatory scrutiny in the United States highlight the value proposition of permissionless, censorship-resistant, decentralized artificial intelligence networks.

On-chain integration of artificial intelligence, particularly the ability to use artificial intelligence models in smart contracts, requires improvements in zkML technology and other methods for verifying off-chain computation. The lack of comprehensive tools and development talent, as well as high costs, are barriers to adoption.

Artificial intelligence agents are well-suited for cryptocurrency, as users (or the agents themselves) can create wallets to transact with other services, agents, or individuals. This is currently not possible under traditional financial channels. Additional integration with non-cryptocurrency products is needed for wider adoption.

Terminology:

Artificial Intelligence (AI) is the use of computation and machines to mimic human reasoning and problem-solving abilities.

Neural networks are one of the main methods used to train AI models. They process input data through a series of algorithmic layers, continually optimizing until the desired output is produced. A neural network consists of equations with modifiable weights that can be adjusted to change the output. Training one to be accurate can require enormous amounts of data and computation. This is one of the most common ways to develop AI models (ChatGPT, for example, uses a Transformer-based neural network).

Training is the process of developing neural networks and other AI models. It requires large amounts of data to teach a model to interpret inputs correctly and produce accurate outputs. During training, the weights of the model's equations are continually adjusted until satisfactory outputs are produced. Training can be very expensive; ChatGPT, for example, uses tens of thousands of GPUs to process its data. Resource-constrained teams often rely on specialized computing providers such as Amazon Web Services, Azure, and Google Cloud.

Inference is the process of actually using an AI model to obtain an output or result (e.g., using ChatGPT to draft the outline of this report). Inference happens both during training and in the final product. Because of the computation involved, running inference can still be costly even after training is complete, though it is far less computationally intensive than training.

Zero-knowledge proofs (ZKPs) allow a claim to be verified without revealing the underlying information. They have two main uses in cryptocurrency: (1) privacy and (2) scalability. For privacy, they let users transact without revealing sensitive information, such as how much ETH is in a wallet. For scalability, they allow off-chain computation to be verified quickly without re-executing it, enabling blockchains and applications to run computations off-chain and then verify them on-chain.
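
To make the "prove without revealing" idea concrete, below is a minimal, illustrative Python sketch of a classic Schnorr-style proof of knowledge: the prover convinces a verifier that it knows a secret exponent x behind a public value y = g^x mod p without ever disclosing x. The parameters and function names are chosen purely for illustration; production zkML systems rely on far more powerful SNARK/STARK constructions rather than this toy scheme.

```python
import hashlib
import secrets

# Toy Schnorr-style proof of knowledge: prove you know x with y = g^x mod p
# without revealing x. Small, non-hardened parameters for illustration only.
p = 2**127 - 1          # a Mersenne prime used as a toy modulus
g = 5

def prove(x: int, y: int) -> tuple[int, int]:
    r = secrets.randbelow(p - 1)
    t = pow(g, r, p)                                    # commitment
    c = int.from_bytes(hashlib.sha256(f"{g}{y}{t}".encode()).digest(), "big") % (p - 1)
    s = (r + c * x) % (p - 1)                           # response
    return t, s

def verify(y: int, t: int, s: int) -> bool:
    c = int.from_bytes(hashlib.sha256(f"{g}{y}{t}".encode()).digest(), "big") % (p - 1)
    # g^s should equal t * y^c when the prover really knows x.
    return pow(g, s, p) == (t * pow(y, c, p)) % p

if __name__ == "__main__":
    x = secrets.randbelow(p - 1)        # the secret (e.g. a private key)
    y = pow(g, x, p)                    # the public statement
    t, s = prove(x, y)
    print("proof verifies:", verify(y, t, s))           # True, yet x was never revealed
```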

Panorama of AI + Cryptocurrency

Projects at the intersection of artificial intelligence and cryptocurrency are still developing the necessary infrastructure to support large-scale on-chain artificial intelligence interactions.

The decentralized computing market is emerging to provide large amounts of physical hardware, primarily GPUs, for training AI models and running inference on them. These two-sided marketplaces connect those renting out computing resources with those seeking to rent them, facilitating value transfer and verification of computation. Several subcategories are emerging within decentralized computing, each providing additional functionality. Beyond two-sided marketplaces, this report also examines providers specializing in verifiable training and fine-tuning outputs for machine learning, as well as projects dedicated to connecting computing resources and model generation in pursuit of artificial general intelligence, often referred to as intelligence incentive networks.

zkML is a key area that aims to deliver verifiable model outputs on-chain in an economically efficient and timely manner. These projects primarily enable applications to handle heavy computation requests off-chain and then post verifiable outputs on-chain, proving that the off-chain work was completed correctly. zkML remains expensive and slow in its current implementations but is increasingly being used as a solution, as shown by the growing number of integrations between zkML providers and the DeFi and gaming applications looking to leverage AI models.

Adequate computing resource supply and the ability to verify computation on-chain open the door for on-chain artificial intelligence agents. Agents are trained models capable of executing requests on behalf of users. Agents provide significant opportunities to enhance on-chain experiences, allowing users to execute complex transactions by conversing with chatbots. However, as of now, agent projects are still focused on developing infrastructure and tools for easy and rapid deployment.

Decentralized Computing

Overview

Artificial intelligence requires significant computing resources, both for training models and for running inference. Over the past decade, as models have grown more complex, demand for computation has increased exponentially. For example, OpenAI found that between 2012 and 2018 the computational requirements of its models went from doubling every two years to doubling every three and a half months. This has driven a surge in demand for GPUs, with some cryptocurrency miners even repurposing their GPUs to provide cloud computing services. As competition for computing resources intensifies and costs rise, some projects are leveraging crypto to provide decentralized computing solutions. They offer on-demand computing at competitive prices so that teams can afford to train and run models, although in some cases the trade-off involves performance and security.

High-end hardware such as Nvidia's latest GPUs is in high demand. In September, Tether acquired a stake in the German Bitcoin miner Northern Data, reportedly paying $420 million to acquire 10,000 H100 GPUs (among the most advanced GPUs for AI training). The wait to obtain the best hardware is at least six months and often longer. Worse, companies typically must sign long-term contracts for computing capacity they may not even use. This can leave computing capacity sitting idle while being unavailable on the open market. Decentralized computing systems help address these inefficiencies by creating a secondary market in which owners of computing resources can rent out their surplus at competitive prices, unlocking new supply.

In addition to competitive pricing and accessibility, a key value proposition of decentralized computing is censorship resistance. The cutting edge of AI development is increasingly dominated by large tech companies with unmatched access to computation and data. One of the key themes highlighted in the 2023 edition of the annual AI Index Report is that industry is increasingly outpacing academia in developing AI models, concentrating control in the hands of a few tech leaders. This has raised concerns about their ability to exert outsized influence in shaping the norms and values underpinning AI models, especially after these same companies pushed for regulation to restrict AI development outside their control.

Vertical Fields of Decentralized Computing

Several decentralized computing models have emerged in recent years, each with its own focus and trade-offs.

General Computing

Projects like Akash, io.net, iExec, and Cudos are decentralized computing applications that provide or will soon provide specific computing resources dedicated to AI training and inference, in addition to data and general computing solutions.

Akash is currently the only fully open-source "supercloud" platform. It is a PoS network using the Cosmos SDK. Akash's native token AKT is used to secure the network, as a form of payment, and to incentivize participation. Akash launched its mainnet in 2020, focusing on providing a permissionless cloud computing marketplace, initially offering storage and CPU leasing services. In June 2023, Akash launched a new testnet focused on GPUs and a GPU mainnet in September, allowing users to rent GPUs for AI training and inference.

The Akash ecosystem has two main roles - tenants and providers. Tenants are users of the Akash network who want to purchase computing resources. Providers are the suppliers of computing resources. To match tenants and providers, Akash relies on a reverse auction process. Tenants submit their computing requirements, in which they can specify certain conditions, such as the location of the servers or the type of hardware for computation, and the amount they are willing to pay. Providers then submit their bids, with the lowest bidder winning the task.
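
As a rough illustration of this matching flow, here is a minimal Python sketch of a reverse auction: a tenant order specifies resource requirements and a price ceiling, providers submit bids, and the cheapest eligible bid wins. The data structures and field names are purely illustrative and do not reflect Akash's actual order format or bidding engine.

```python
from dataclasses import dataclass

@dataclass
class Order:
    """A tenant's order: required resources and a price ceiling (e.g. in uAKT per block)."""
    gpus: int
    region: str
    max_price: int

@dataclass
class Bid:
    """A provider's offer to fill an order at a given price."""
    provider: str
    gpus: int
    region: str
    price: int

def match_reverse_auction(order: Order, bids: list[Bid]) -> Bid | None:
    """Return the cheapest bid that satisfies the order's constraints, if any."""
    eligible = [
        b for b in bids
        if b.gpus >= order.gpus
        and b.region == order.region
        and b.price <= order.max_price
    ]
    return min(eligible, key=lambda b: b.price, default=None)

if __name__ == "__main__":
    order = Order(gpus=2, region="us-west", max_price=1000)
    bids = [
        Bid("provider-a", gpus=4, region="us-west", price=900),
        Bid("provider-b", gpus=2, region="us-west", price=750),
        Bid("provider-c", gpus=8, region="eu-central", price=600),  # wrong region, ineligible
    ]
    print(match_reverse_auction(order, bids))  # provider-b wins: lowest eligible price
```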

Akash validators maintain the integrity of the network. The validator set is currently limited to 100 and is planned to gradually increase over time. Anyone can become a validator by staking more AKT than the current minimum staked amount for validators. AKT holders can also delegate their AKT to validators. Transaction fees and block rewards on the network are distributed in AKT. Additionally, for each lease, the Akash network charges a "fee" at a rate determined by the community, which is distributed to AKT holders.

Secondary Market

Decentralized computing marketplaces aim to address inefficiencies in the existing computing market. Supply constraints lead companies to reserve more computing capacity than they may need, and the contract structures used by cloud providers constrain supply further, locking clients into long-term agreements even when continuous usage is not required. Decentralized computing platforms unlock new supply by allowing anyone in the world with spare computing resources to become a provider.

It is currently unclear whether the surge in demand for GPUs used for AI training will translate into long-term usage on the Akash network. Akash has long offered a marketplace for CPUs at a 70-80% discount to centralized alternatives, yet the lower prices have not driven significant adoption. Rental activity on the network has plateaued, with average utilization in the second quarter of 2023 at only 33% for compute, 16% for memory, and 13% for storage. While these are respectable metrics for on-chain adoption (for reference, leading storage provider Filecoin had a storage utilization rate of 12.6% in the third quarter of 2023), they indicate that supply continues to exceed demand for these products.

It is still early to accurately measure long-term adoption rates for the Akash GPU network, which was launched just over six months ago. As an indicator of demand, the average utilization rate for GPUs so far is 44%, higher than for CPUs, memory, and storage. This is primarily driven by the demand for high-quality GPUs (such as A100s), with over 90% of high-quality GPUs already rented out.

Akash's daily spending has also increased, almost doubling relative to before the introduction of GPUs. This is partly attributed to increased usage of other services, especially CPUs, but mainly due to the introduction of new GPUs.

Pricing is comparable to centralized competitors such as Lambda Cloud and Vast.ai (or even more expensive in some cases). The huge demand for high-end GPUs (such as H100 and A100) means that most owners of these devices are not interested in listing them on a market with competitive pricing.

Despite the initial excitement, adoption still faces barriers (further discussed below). Decentralized computing networks need to take more measures to create demand and supply, and teams are exploring how to better attract new users. For example, in early 2024, Akash passed Proposal 240 to increase AKT release for GPU providers and incentivize more supply, specifically targeting high-end GPUs. The team is also working on rolling out proof-of-concept models to showcase the real-time capabilities of their network to potential users. Akash is training its own base models and has already launched chatbots and image generation products using Akash GPUs. Similarly, io.net has developed stable diffusion models and is rolling out new network features to better simulate the performance and scale of traditional GPU data centers.

Decentralized Machine Learning Training

In addition to general computing platforms that can meet the needs of artificial intelligence, a range of dedicated AI GPU providers focused on machine learning model training has emerged. For example, Gensyn is "coordinating power and hardware to build collective intelligence," believing that "if someone wants to train something, and someone is willing to train it, then training should be allowed."

The protocol has four main roles: submitters, solvers, validators, and reporters. Submitters submit tasks to the network with training requests. These tasks include training objectives, the model to be trained, and training data. As part of the submission process, submitters prepay a fee to cover the estimated computational cost for solvers.

Once submitted, tasks are assigned to solvers, who actually train the model. The solvers then submit the completed tasks to validators, who are responsible for checking if the training was completed correctly. Reporters are responsible for ensuring the honesty of validators. To incentivize reporters to participate in the network, Gensyn plans to regularly provide proofs of intentional errors to reward reporters for catching them.

In addition to providing computation for AI-related work, Gensyn's key value proposition is its verification system, which is still under development. Verification is necessary to ensure that computations performed externally by GPU providers are correct (i.e., that a user's model was trained the way they intended). Gensyn takes a unique approach to this problem, using a novel verification method it describes as combining probabilistic proof-of-learning, a graph-based pinpoint protocol, and Truebit-style incentive games. This is an optimistic verification model: it allows validators to confirm that a solver ran the model correctly without fully re-running the training, which would be expensive and inefficient.
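
The following is a deliberately simplified Python sketch of the optimistic idea, not Gensyn's actual proof system: the solver publishes a checkpoint after every deterministic training step, and a verifier re-executes only a few randomly sampled steps from the preceding checkpoints, escalating to a dispute if any mismatch appears. The "training step" here is a stand-in hash function, and every name is hypothetical.

```python
import hashlib
import random

def train_step(prev_state: bytes, step: int) -> bytes:
    """Stand-in for one deterministic training step (e.g. one gradient update)."""
    return hashlib.sha256(prev_state + step.to_bytes(4, "big")).digest()

def solver_run(seed: bytes, steps: int) -> list[bytes]:
    """Solver trains and publishes the model state at every checkpoint."""
    state, checkpoints = seed, []
    for step in range(steps):
        state = train_step(state, step)
        checkpoints.append(state)
    return checkpoints

def spot_check(seed: bytes, claimed: list[bytes], samples: int = 4) -> bool:
    """Verifier re-executes only a few randomly sampled steps.

    Each check replays a single step from the previous claimed checkpoint, so the
    verifier does a small, constant amount of work regardless of how long training
    ran; any mismatch would trigger a dispute (pinpointed and slashed in a
    Truebit-style game in the real design).
    """
    for step in random.sample(range(len(claimed)), samples):
        prev = seed if step == 0 else claimed[step - 1]
        if train_step(prev, step) != claimed[step]:
            return False  # dispute: the solver is slashed, the reporter rewarded
    return True

if __name__ == "__main__":
    seed = b"initial-weights"
    claimed = solver_run(seed, steps=64)
    print("honest run passes:", spot_check(seed, claimed))
    claimed[10] = b"\x00" * 32              # a dishonest solver fakes one checkpoint
    print("tampered run passes:", spot_check(seed, claimed, samples=64))
```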

In addition to its innovative validation method, Gensyn claims to be cost-effective compared to centralized alternatives and crypto competitors - offering ML training up to 80% cheaper than AWS, while outperforming similar projects like Truebit in testing.

Whether these preliminary results can be replicated at scale in a decentralized network remains to be seen. Gensyn aims to leverage surplus computing resources from providers such as small data centers, individual users, and even future small mobile devices like smartphones. However, as acknowledged by the Gensyn team, relying on heterogeneous computing providers introduces several new challenges.

For centralized providers like Google Cloud and CoreWeave, computation is expensive while communication between machines (bandwidth and latency) is cheap; these systems are designed to move data between hardware as quickly as possible. Gensyn inverts this framework: it lowers the cost of computation by letting anyone in the world supply GPUs, but it raises the cost of communication, since the network must now coordinate computational tasks across decentralized, geographically distant, and heterogeneous hardware. Gensyn has not launched yet, but it illustrates the trade-offs involved in building decentralized machine learning training protocols.

Decentralized General Artificial Intelligence

Decentralized computing platforms also open up new ways of designing artificial intelligence itself. Bittensor is a decentralized computing protocol built on Substrate that seeks to answer the question of how to turn artificial intelligence into a collaborative effort. Bittensor aims to decentralize and commoditize the generation of AI. Launched in 2021, the protocol seeks to harness the power of collaborative machine learning models to iterate and produce ever-better artificial intelligence.

Bittensor draws inspiration from Bitcoin: its native currency TAO has a supply of 21 million and a halving cycle of roughly four years (the first halving is expected in 2025). Rather than using proof of work to generate a valid nonce and earn block rewards, Bittensor relies on "Proof of Intelligence," requiring miners to run models that produce outputs in response to inference requests.

Intelligent Incentive Network

Bittensor initially relied on a Mixture of Experts (MoE) model to generate outputs. Rather than relying on a single generalized model, an MoE model routes each inference request to the models that are most accurate for that particular input type. This can be likened to building a house by hiring various experts for different aspects of the construction process (architects, engineers, painters, construction workers, and so on). MoE applies this to machine learning, attempting to combine the outputs of different models depending on the input. As Bittensor co-founder Ala Shaabana explained, it is like "talking to a group of smart people rather than one person to get the best answer." Because of challenges around ensuring correct routing, synchronizing messages to the right models, and incentivization, this approach has been shelved until the project matures further.

The Bittensor network has two main roles: validators and miners. Validators are responsible for sending inference requests to miners, reviewing their outputs, and ranking them based on the quality of their responses. To ensure the reliability of their rankings, validators are assigned "vtrust" scores based on the consistency of their rankings with those of other validators. The higher a validator's vtrust score, the more TAO they can earn. This is intended to encourage validators to reach consensus on model rankings over time, as the more validators reach consensus on model rankings, the higher their individual vtrust scores.

Miners, also known as servers, are participants in the network running actual machine learning models. Miners compete to provide the most accurate outputs for a given query, earning more TAO issuance the more accurate their outputs are. Miners can generate these outputs in any way they see fit. For example, in a future scenario, a Bittensor miner could potentially train a model in advance on Gensyn and then use it to earn TAO.

Most interactions currently occur directly between validators and miners. Validators submit inputs to miners and request outputs (i.e., training models). Once validators query miners in the network and receive their responses, they then rank the miners and submit their rankings to the network.

This interaction between validators (relying on PoS) and miners (relying on Proof of Model, a form of PoW) is called Yuma consensus. It aims to encourage miners to produce the best outputs to earn TAO and to encourage validators to accurately rank miner outputs to earn higher vtrust scores and increase their TAO rewards, forming a consensus mechanism for the network.
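
To give a feel for how these incentives could fit together, here is a heavily simplified numerical sketch in Python: validators submit score vectors over miners, a stake-weighted consensus determines miner rewards, and each validator's "vtrust" (and thus its own reward) depends on how closely its scores track that consensus. The formulas below (stake-weighted average, L1 similarity) are illustrative stand-ins and do not reproduce the actual Yuma consensus math.

```python
import numpy as np

# Illustrative stakes and scores; not real network data.
stake = np.array([0.5, 0.3, 0.2])            # relative stake of three validators

# Each row: one validator's normalized scores ("weights") over four miners.
weights = np.array([
    [0.40, 0.30, 0.20, 0.10],
    [0.35, 0.35, 0.20, 0.10],
    [0.10, 0.10, 0.40, 0.40],                 # an outlier validator
])

# Consensus view of each miner: stake-weighted average of validator scores.
consensus = stake @ weights
consensus /= consensus.sum()

# Miner incentive: share of miner emissions proportional to consensus score.
miner_incentive = consensus

# Validator trust ("vtrust"): how closely a validator's scores track consensus,
# measured here as 1 minus normalized L1 distance (the real mechanism differs).
vtrust = 1 - 0.5 * np.abs(weights - consensus).sum(axis=1)

# Validator emissions scale with both stake and vtrust.
validator_incentive = stake * vtrust
validator_incentive /= validator_incentive.sum()

print("consensus miner scores:", consensus.round(3))
print("vtrust per validator:  ", vtrust.round(3))     # the outlier earns less trust
print("validator emission cut:", validator_incentive.round(3))
```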

Subnets and Applications

As mentioned earlier, interactions on Bittensor today primarily consist of validators submitting requests to miners and evaluating their outputs. However, as the quality of contributing miners improves and intelligence on the network grows overall, Bittensor plans to create an application layer on top of its existing stack so that developers can build applications that query the Bittensor network.

In October 2023, Bittensor took a significant step toward this goal with its Revolution upgrade, which introduced subnets. Subnets are independent networks on Bittensor that incentivize specific behaviors. Revolution opened the network to anyone interested in creating a subnet. In the months since its release, over 32 subnets have launched, including subnets for text prompting, data scraping, image generation, and storage. As subnets mature and their products become ready, subnet creators will also build application integrations, allowing teams to create applications that query specific subnets. Some applications (chatbots, image generators, Twitter reply bots, prediction markets) exist today, but beyond funding from the Bittensor Foundation, validators have no formal incentive to accept and forward these queries.

To provide a clearer illustration, the following diagram shows an example of how Bittensor might operate after integrating applications.

Subnets earn TAO based on their performance as evaluated by the root network. The root network sits above all subnets, essentially acting as a special subnet of its own, and is managed by the 64 largest subnet validators by stake. Root network validators rank subnets based on performance and periodically allocate TAO emissions to them. In this way, each subnet effectively acts as a miner for the root network.

Prospects for Bittensor

Bittensor is still experiencing growing pains as it expands the protocol to incentivize intelligence generation across multiple subnets. Miners continue to devise new ways to game the network for more TAO, for example by slightly modifying the output of a highly rated inference run and submitting multiple variants. Governance proposals affecting the whole network can only be submitted and implemented by the Triumvirate, which is composed entirely of Opentensor Foundation stakeholders (though, notably, proposals require approval by Bittensor validators before implementation). The project's token economics are being revised to strengthen incentives for using TAO within subnets. The project has also gained attention rapidly for its unique approach, with the CEO of the popular AI site HuggingFace suggesting that Bittensor should add its resources to the website.

In a recent article titled "Bittensor Paradigm" published by core developers, the team outlined their vision for Bittensor to ultimately evolve into "agnostic to what is being measured." Theoretically, this would enable Bittensor to develop subnets incentivizing any type of behavior, all supported by TAO. However, there are still significant practical constraints - most notably the need to demonstrate that these networks can scale to handle such diverse processes and that progress driven by the underlying incentive mechanisms surpasses that of centralized providers.

Building a Decentralized Computing Stack for AI Models

The above sections outline the frameworks of various types of decentralized AI computing protocols under development. While they are still in early stages of development and adoption, they provide the foundation for an ecosystem that could ultimately facilitate the creation of "AI building blocks," akin to the concept of "DeFi Legos." The composability of permissionless blockchains opens up the possibility for each protocol to be built upon by others, providing a more comprehensive decentralized AI ecosystem.

For example, here is one way Akash, Gensyn, and Bittensor might interact to respond to inference requests.

It should be noted that this is just an example of what could happen in the future, not a statement about the current ecosystem, existing partnerships, or potential outcomes. Today, interoperability limitations and other considerations described below significantly restrict integration possibilities. Additionally, the fragmentation of liquidity and the need to use multiple tokens could negatively impact user experience, as pointed out by the founders of Akash and Bittensor.

Other Decentralized Products

In addition to computing, there are several other decentralized infrastructure services to support the emerging cryptocurrency AI ecosystem. Listing all of them is beyond the scope of this report, but some interesting and representative examples include:

Ocean: a decentralized data marketplace. Users can mint data NFTs representing their data along with data tokens that others purchase to access it. This lets users monetize their data while retaining greater sovereignty over it, and gives teams working on AI development and model training the data access they need.

Grass: a decentralized bandwidth marketplace. Users can sell their surplus bandwidth to AI companies, which use it to scrape data from the internet. Built on the Wynd Network, the marketplace lets individuals monetize their bandwidth while giving buyers a more diverse view of what individual users see online (since what an individual sees on the internet is typically tailored to their specific IP address).

HiveMapper: a decentralized mapping product built from information collected by drivers. HiveMapper relies on AI to interpret images collected from users' dashboard cameras and rewards users for helping refine its AI models through reinforcement learning from human feedback (RLHF).

Overall, these point to nearly endless opportunities for exploring decentralized market models to support AI models or the peripheral infrastructure needed to develop these models. Currently, these projects are mostly in the proof-of-concept stage and require further research and development to demonstrate their ability to provide comprehensive AI services at the required scale.

Outlook

Decentralized computing products are still in the early stages of development. They are just beginning to utilize state-of-the-art computing power to train the most powerful AI models in production environments. To gain meaningful market share, they need to demonstrate real advantages compared to centralized alternatives. Potential incentives for wider adoption include:

GPU supply and demand. GPU shortages, coupled with rapidly growing computational demands, have led to a GPU race. With GPUs limited, OpenAI has already restricted usage on its platform. Platforms like Akash and Gensyn can provide cost-competitive alternatives for teams needing high-performance computing. The next 6-12 months are a unique opportunity for decentralized computing providers to attract new users, as these users are forced to consider decentralized solutions. Coupled with increasingly efficient open-source models (such as Meta's LLaMA 2), users no longer face the same barriers in deploying effective fine-tuned models, making computational resources the primary bottleneck. However, the existence of the platform itself does not guarantee an adequate supply of computational resources and corresponding demand from consumers. Acquiring high-end GPUs is still difficult, and cost is not always the primary motivator for demand. These platforms will face the challenge of proving the actual benefits of using decentralized computing - whether it's cost, censorship resistance, duration and elasticity, or usability - to accumulate sticky users. Therefore, these protocols have to act quickly. The pace of investment and construction of GPU infrastructure is astonishing.

Regulation. Regulation continues to be a major obstacle for the decentralized computing movement. In the short term, the lack of clear regulation means providers and users using these services face potential risks. What happens if providers inadvertently provide computation or buyers purchase computation from sanctioned entities? Users may be reluctant to use decentralized platforms lacking centralized entity control and oversight. Protocols have attempted to mitigate these concerns by incorporating controls into their platforms or providing filters for known computation providers (i.e., providing KYC information), but stronger methods are needed to protect privacy while ensuring compliance to foster adoption. In the short term, we may see the emergence of KYC and compliant platforms, limiting the use of their protocols to address these issues.

Censorship. Regulation is a two-way street, and decentralized computing providers may benefit from actions taken to limit the use of AI. In addition to executive orders, OpenAI founder Sam Altman testified before Congress, emphasizing the need for regulatory bodies to issue AI development licenses. Discussions around AI regulation have just begun, but any attempts to limit the use of AI or censor AI could accelerate the adoption of decentralized platforms without these barriers. The leadership changes at OpenAI last November further demonstrated the risks of granting decision-making power to a few individuals with the most powerful existing AI models. Additionally, all AI models inevitably reflect the biases of their creators, whether intentional or unintentional. One way to eliminate these biases is to make models as open as possible for fine-tuning and training, ensuring that anyone can use models with various biases anytime, anywhere.

Data privacy. Decentralized computing may be more attractive than alternative solutions when integrated with external data and privacy solutions that provide users with data sovereignty. Samsung faced such issues when they realized engineers were using ChatGPT to assist with chip design and inadvertently leaked sensitive information to ChatGPT. Phala Network and iExec claim to provide users with SGX secure enclaves to protect user data and are researching fully homomorphic encryption to further unlock privacy-preserving decentralized computing. As AI becomes further integrated into our lives, users will place greater importance on running models on applications with privacy-preserving capabilities. Users will also demand composability of data to seamlessly transfer their data from one model to another.

User experience (UX). User experience remains a significant barrier to broader adoption for all types of crypto applications and infrastructure. Decentralized computing solutions are no exception, and in some cases the problem is compounded by the need for developers to understand both crypto and AI. Areas for improvement range from abstracting away onboarding and blockchain interaction to delivering outputs of the same quality as current market leaders. The challenge is evident in the fact that many operational decentralized computing protocols offer cheaper solutions yet struggle to gain mainstream usage.

Smart Contracts and Zero-Knowledge Machine Learning (zkML)

Smart contracts are a core part of any blockchain ecosystem. They execute automatically under specific conditions and reduce or eliminate the need for trusted third parties, enabling the complex decentralized applications seen in DeFi. However, smart contracts remain limited in functionality because they execute based on preset parameters that must be updated manually.

For example, a lending protocol's smart contract specifies when a position should be liquidated based on a given loan-to-value ratio. In a dynamic environment where risk changes constantly, these parameters must be updated continually to reflect shifting risk tolerance, which is a challenge for contracts managed through decentralized governance; a DAO relying on such a process may not be able to respond to systemic risk in time.
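
The sketch below illustrates that limitation in Python: the liquidation threshold is a static constant baked into the contract logic, so even if a risk model would prefer to tighten it during a volatile market, the value can only change through a governance vote. The names and numbers are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Position:
    collateral_usd: float
    debt_usd: float

# Static parameter baked into the contract; changing it requires a governance vote.
LIQUIDATION_LTV = 0.80

def is_liquidatable(pos: Position) -> bool:
    """A position is liquidatable once debt exceeds the allowed share of collateral."""
    return pos.debt_usd / pos.collateral_usd > LIQUIDATION_LTV

if __name__ == "__main__":
    print(is_liquidatable(Position(collateral_usd=10_000, debt_usd=8_500)))  # True
    # In a volatile market a risk model might want to tighten LIQUIDATION_LTV
    # immediately, but an on-chain constant can only change via governance.
```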

Integrating AI (such as machine learning models) into smart contracts is one potential way to enhance their functionality, security, and efficiency while improving overall user experience. However, these integrations also introduce new risks, since there is no guarantee that the models underpinning these smart contracts cannot be exploited, or that they account for long-tail situations (which are notoriously difficult to train for given the scarcity of relevant data).

Zero-Knowledge Machine Learning (zkML)

Machine learning requires significant computational resources to run complex models, which makes it impractical to run AI models directly inside smart contracts due to the high cost. For example, a DeFi protocol might offer users a yield-optimization model, but attempting to run that model on-chain would incur prohibitive gas fees. One solution is to increase the computational capacity of the underlying blockchain, but that would also increase the burden on the chain's validators and could undermine decentralization. Instead, some projects are exploring zkML to verify outputs in a trustless manner without requiring intensive on-chain computation.

A common example of zkML's usefulness is when a user needs someone else to run data through a model and also needs to verify that the counterparty actually ran the model they claimed. Perhaps a developer is using a decentralized computing provider to train a model and worries that the provider is cutting costs by substituting a cheaper model whose outputs are nearly indistinguishable. zkML allows the computing provider to run data through its model and then generate a proof, verifiable on-chain, that the model's output for a given input is correct. In this scenario, the model provider gains the added advantage of being able to offer its model without revealing the underlying weights that produce the output.

The reverse is also possible. If a user wants to run a model over their own data but, for privacy reasons (such as medical records or proprietary business information), does not want the project providing the model to access that data, the user can run the model on their data without sharing it and receive a proof that the correct model was run. These possibilities greatly expand the design space for combining AI and smart contract functionality by addressing prohibitive computational constraints.

Infrastructure and Tools

Given the early stage of the zkML field, development is primarily focused on building the infrastructure and tools needed for teams to convert their models and outputs into verifiable proofs that can be validated on-chain. These products abstract as much of the zero-knowledge aspects of development as possible.

Two projects, EZKL and Giza, are building such tools by providing verifiable proofs of machine learning model execution. Both help teams take machine learning models and ensure those models can be executed trustlessly and verified on-chain. Both use the Open Neural Network Exchange (ONNX) to convert models written in common frameworks such as TensorFlow and PyTorch into a standard format. At execution time, the models produce outputs together with zk proofs attesting to those outputs. EZKL is open-source and generates zk-SNARKs, while Giza is closed-source and generates zk-STARKs. Both projects are currently EVM-only.
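
To show where the ONNX step sits in that pipeline, here is a small Python sketch that exports a toy PyTorch model to ONNX, the format both EZKL and Giza ingest. The subsequent circuit compilation and proving steps are shown only as commented pseudocode with placeholder names (gen_settings, compile_circuit, gen_witness, prove), since the exact APIs differ between tools and versions and are not asserted here.

```python
import torch
import torch.nn as nn

# A tiny model standing in for whatever a team wants verifiable inferences for.
class RiskScorer(nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.net = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.net(x))

model = RiskScorer().eval()
sample_input = torch.rand(1, 4)

# Step 1: export to ONNX, the interchange format both EZKL and Giza consume.
torch.onnx.export(model, sample_input, "risk_scorer.onnx",
                  input_names=["input"], output_names=["score"])

# Step 2 (tool-specific, shown only as an outline with placeholder names): a zkML
# prover takes the ONNX graph, compiles it into an arithmetic circuit, and for a
# given input produces an output together with a proof:
#
#   settings = gen_settings("risk_scorer.onnx")          # circuit parameters
#   circuit  = compile_circuit("risk_scorer.onnx", settings)
#   witness  = gen_witness(circuit, user_input)           # run the model in-circuit
#   proof    = prove(circuit, witness)                     # zk-SNARK / zk-STARK
#
# Step 3: a verifier contract on an EVM chain checks the proof, so a smart
# contract can rely on the score without re-running the model on-chain.
```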

Over the past few months, EZKL has made significant progress in maturing its zkML solution, focusing primarily on reducing cost, improving security, and accelerating proof generation. For example, in November 2023 EZKL integrated a new open-source GPU library that reduced overall proving time by 35%, and in January it announced Lilith, a software solution for integrating high-performance computing clusters and orchestrating concurrent jobs on the EZKL proving system. Giza is distinctive in that, beyond tooling for creating verifiable machine learning models, it plans to implement a web3 equivalent of Hugging Face, opening a marketplace for zkML collaboration and model sharing and eventually integrating decentralized computing products. In January, EZKL released a benchmark comparing the performance of EZKL, Giza, and RiscZero (discussed below), showing faster proving times and lower memory usage for EZKL.

Modulus Labs is also developing a new zk-proof technique tailored for AI models. Modulus published a paper titled "The Cost of Intelligence," which benchmarked the zk-proof systems available at the time to identify their capabilities and bottlenecks for proving AI models, and highlighted just how costly running AI models on-chain currently is. The paper, published in January 2023, concluded that existing solutions are too expensive and inefficient to enable AI applications at scale. Building on that research, Modulus launched Remainder in November, a specialized zero-knowledge prover built to reduce the cost and proving time of AI models, with the goal of making it economically viable for projects to integrate models into their smart contracts. Their work is closed-source and so cannot be benchmarked against the solutions above, but it was recently cited in Vitalik's blog post on crypto and AI.

The development of tools and infrastructure is crucial for the growth of the zkML field because it greatly reduces the friction for teams that need to deploy zk circuits for verifiable off-chain computation. Secure interfaces that let machine learning developers who are not crypto-native bring their models on-chain will enable more experimentation with applications that have truly novel use cases. The tools also address a major barrier to broader zkML adoption: the shortage of developers who are knowledgeable about, and interested in, the intersection of zero-knowledge proofs, machine learning, and cryptography.

Coprocessors

Other solutions under development, known as "coprocessors," include RiscZero, Axiom, and Ritual. The term "coprocessor" is mostly semantic - these networks serve many different roles, including on-chain verification of off-chain computations. Like EZKL, Giza, and Modulus, their goal is to fully abstract the zero-knowledge proof generation process, creating essentially a zero-knowledge virtual machine capable of executing programs off-chain and generating proofs for on-chain verification. RiscZero and Axiom can handle simple AI models as they are more general-purpose coprocessors, while Ritual is specifically built to work with AI models.

Infernet is Ritual's first product and includes an Infernet SDK that lets developers submit inference requests to the network and receive outputs and (optionally) proofs in return. Infernet nodes receive these requests, process the computation off-chain, and return the output. For example, a DAO could create a process ensuring that all new governance proposals meet certain prerequisites before submission. Each time a new proposal is submitted, the governance contract triggers an inference request through Infernet, invoking a governance-specific AI model trained for that DAO. The model reviews the proposal to check that all necessary conditions are met and returns an output and proof used to approve or reject the submission.
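
As a rough illustration of that request-and-callback pattern, the Python sketch below simulates a governance contract that emits inference requests and an off-chain node that processes them and calls back with a result and a placeholder proof. Everything here (class names, the toy "model," the proof string) is hypothetical and is not based on the actual Infernet SDK.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class InferenceRequest:
    request_id: int
    payload: str                      # e.g. the text of a governance proposal
    callback: Callable[[int, bool, str], None]

@dataclass
class GovernanceContract:
    """Stand-in for an on-chain contract that gates proposals on a model's verdict."""
    accepted: dict[int, bool] = field(default_factory=dict)
    pending: list[InferenceRequest] = field(default_factory=list)

    def submit_proposal(self, proposal_id: int, text: str) -> None:
        # Instead of deciding on-chain, emit a request for off-chain inference.
        self.pending.append(InferenceRequest(proposal_id, text, self.on_result))

    def on_result(self, proposal_id: int, approved: bool, proof: str) -> None:
        # In a real deployment the proof would be verified on-chain before acceptance.
        self.accepted[proposal_id] = approved

def off_chain_node(contract: GovernanceContract) -> None:
    """Stand-in for an off-chain node: pull requests, run the model, call back."""
    while contract.pending:
        req = contract.pending.pop(0)
        meets_prereqs = "budget" in req.payload.lower()   # toy stand-in for a model
        req.callback(req.request_id, meets_prereqs, proof="0xproof")

if __name__ == "__main__":
    dao = GovernanceContract()
    dao.submit_proposal(1, "Fund grants program: budget 50k USDC")
    dao.submit_proposal(2, "gm")
    off_chain_node(dao)
    print(dao.accepted)   # {1: True, 2: False}
```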

In the coming year, the Ritual team plans to launch additional features constituting the infrastructure layer, called Ritual Superchain. Many of the previously discussed projects can be plugged into Ritual as service providers. Currently, the Ritual team has integrated proof generation with EZKL and may soon add functionality from other leading providers. Infernet nodes on Ritual can also use Akash or io.net GPUs and query models trained on the Bittensor subnet. Their ultimate goal is to become the preferred provider of open AI infrastructure, capable of providing machine learning and other AI-related task services for any job on any network.

Applications

zkML helps reconcile the conflict between blockchain and AI, with the former inherently resource-constrained and the latter requiring significant computation and data. As a co-founder of Giza put it, "The use cases are so rich… it's a bit like asking what the use cases for smart contracts were in the early days of Ethereum… we're expanding the use cases of smart contracts." However, as emphasized earlier, current development is primarily focused on the tools and infrastructure layer. Applications are still in the exploratory stage, and teams face the challenge of proving that the value of implementing models using zkML outweighs the complexity and cost of doing so.

Some current applications include:

DeFi. zkML expands the design space of DeFi by enhancing smart contract functionality. DeFi protocols provide machine learning models with verifiable and immutable data that can be used to generate yield or trading strategies, perform risk analysis, improve UX, and more. For example, Giza partnered with Yearn Finance to build a proof-of-concept automated risk assessment engine for Yearn's new v3 vaults. Modulus Labs worked with Lyra Finance to incorporate machine learning into its AMMs, with Ion Protocol to validate validator risk models, and with Upshot to verify its AI-based NFT price feeds. Protocols like NOYA (which leverages EZKL) and Mozaic provide access to proprietary off-chain models, letting users participate in automated APY yield farming pools while verifying the data inputs and proofs on-chain. Spectral Finance is building an on-chain credit scoring engine to predict the likelihood that Compound or Aave borrowers will default. Thanks to zkML, these so-called "De-Ai-Fi" products are likely to become far more common in the coming years.

Gaming. The intersection of blockchain and gaming has long been seen as ripe for disruption and enhancement, and zkML makes on-chain gaming with AI possible. Modulus Labs has built proofs of concept for simple on-chain games. Leela vs the World is a game-theoretic chess match in which users collectively play against an AI chess model, with zkML verifying that every move Leela makes comes from the model the game claims to run. Similarly, teams have used the EZKL framework to build simple singing contests and on-chain tic-tac-toe. Cartridge is using Giza to let teams deploy fully on-chain games, recently showcasing a simple AI driving game in which users compete to create better models that steer a car around obstacles. While simple, these proofs of concept point toward future implementations with more complex on-chain interactions, such as sophisticated NPC characters interacting with in-game economies in AI Arena, a Super Smash Bros-like game in which players train their fighters and then deploy them as AI models to battle.

Identity, Provenance, and Privacy. Cryptography has long been used to verify authenticity and combat the growing volume of AI-generated and manipulated content and deepfakes, and zkML can advance these efforts. Worldcoin is a proof-of-personhood solution that requires users to scan their iris to generate a unique ID. In the future, biometric IDs could be self-sovereign, stored encrypted on personal devices, with the models needed to verify the biometrics run locally. Users could then present proof of their biometrics without revealing who they are, resisting Sybil attacks while preserving privacy. The same pattern applies to other privacy-preserving inference, such as using models to analyze medical data and images for disease detection, verifying identity and powering matching algorithms in dating apps, or enabling insurance and lending institutions to verify financial information.

Outlook

zkML is still experimental, with most projects focused on building infrastructure primitives and proofs of concept. Current challenges include computational cost, memory limitations, model complexity, limited tooling and infrastructure, and a shortage of developer talent. In short, considerable work remains before zkML can be deployed at the scale consumer products require.

However, as the field matures and these limitations are addressed, zkML will become a key component of the integration of AI and crypto. At its core, zkML promises to bring any scale of off-chain computation on-chain while maintaining the same or near-equivalent security guarantees as on-chain computation. However, before this vision can be realized, early adopters of the technology will continue to have to balance the privacy and security of zkML with the efficiency of alternative solutions.

AI Agents

One of the most exciting crypto-AI integrations currently being experimented with is AI agents. Agents are autonomous bots capable of receiving, interpreting, and executing tasks using AI models. An agent could be anything from an always-available personal assistant tuned to your preferences to a financial agent hired to manage and adjust a portfolio according to your risk tolerance.

Agents and crypto are a natural fit because crypto provides permissionless and trustless payment infrastructure. Once trained, an agent can be given a wallet so it can transact with smart contracts on its own. For example, simple agents today can scrape the internet for information and then make trades on a market based on a model's predictions.
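
Below is a minimal, purely illustrative Python sketch of that loop: an agent controls its own wallet, turns a data signal into a model decision, and submits a stubbed swap. All functions and values are hypothetical stand-ins; a real agent would sign transactions with a key it holds and call a DEX router contract via a library such as web3.py.

```python
from dataclasses import dataclass
import random

@dataclass
class Wallet:
    """Toy stand-in for a key pair the agent controls itself."""
    address: str
    balance_usdc: float

def fetch_signal() -> float:
    """Stand-in for the agent gathering data (news, prices, on-chain metrics)."""
    return random.uniform(-1, 1)

def model_predict(signal: float) -> str:
    """Stand-in for an AI model turning the signal into an action."""
    return "buy" if signal > 0.2 else "hold"

def swap_on_dex(wallet: Wallet, amount_usdc: float) -> None:
    """Stand-in for signing and submitting a swap transaction."""
    wallet.balance_usdc -= amount_usdc
    print(f"{wallet.address} swapped {amount_usdc} USDC for ETH")

def agent_step(wallet: Wallet) -> None:
    action = model_predict(fetch_signal())
    if action == "buy" and wallet.balance_usdc >= 100:
        swap_on_dex(wallet, 100)

if __name__ == "__main__":
    wallet = Wallet(address="0xAgent", balance_usdc=500)
    for _ in range(5):
        agent_step(wallet)
    print("remaining balance:", wallet.balance_usdc)
```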

Agent Providers

Morpheus is one of the newest open-source agent projects, launching on Ethereum and Arbitrum in 2024. Its whitepaper was published anonymously in September 2023, providing the foundation for a community to form and build around it (including well-known figures like Erik Voorhees). The whitepaper describes a downloadable Smart Agent Protocol: an open-source LLM that can run locally, be managed by the user's wallet, and interact with smart contracts. It uses a smart contract ranking to help the agent determine which contracts are safe to interact with, based on criteria such as the number of transactions processed.

The whitepaper also lays out a framework for building the Morpheus network, including the incentive structures and infrastructure needed to make the Smart Agent Protocol operational. This includes incentives for contributors to build front ends for interacting with agents, APIs that let developers plug agents into one another, and cloud solutions that give users the computing and storage needed to run agents on edge devices. The project's initial funding launched in the early second quarter of 2024, when the full protocol is also expected to be released.

Decentralized Autonomous Infrastructure Network (DAIN) is a new agent infrastructure protocol building an agent-to-agent economy on Solana. DAIN aims to let agents from different enterprises interact seamlessly through a shared API, greatly expanding the design space for AI agents, with a focus on agents that can interact with both web2 and web3 products. In January, DAIN announced its first partnership with Asset Shield, allowing users to add "agent signers" to their multisig wallets, which can interpret transactions according to user-set rules and approve or reject them.

Fetch.AI is one of the earliest deployed AI agent protocols and has developed an ecosystem for building, deploying, and using agents on-chain using its FET token and Fetch.AI wallet. The protocol provides a comprehensive set of tools and applications for using agents, including wallet functionalities for interacting with agents and issuing commands.

Autonolas was founded by former members of the Fetch team and is building an open marketplace for creating and using decentralized AI agents. Autonolas also provides developers with a toolset for building AI agents that are hosted off-chain but can connect to multiple chains, including Polygon, Ethereum, Gnosis Chain, and Solana. The team currently has several working agent proofs of concept, including products for prediction markets and DAO governance.

SingularityNet is building a decentralized marketplace for AI agents, where people can deploy AI agents focused on specific domains, which can be hired by others or other agents to perform complex tasks. Other projects, such as AlteredStateMachine, are building AI agent integrations with NFTs. Users mint NFTs with random attributes that give them advantages and disadvantages for different tasks. These agents can then be trained to enhance certain attributes for use in games, DeFi, or as virtual assistants, and can be traded with other users.

Overall, these projects envision a future ecosystem of agents that can work together, not only to execute tasks but also to help build artificial general intelligence. Truly complex agents will be able to autonomously execute any user task. For example, it will not be necessary to ensure that an agent has integrated external APIs (such as travel booking websites) before using it, as fully autonomous agents will have the ability to figure out how to hire another agent to integrate the API and then execute the task. From the user's perspective, there will be no need to check if an agent can perform a task, as the agent can determine it on its own.

Bitcoin and AI Agents

In July 2023, Lightning Labs launched a proof-of-concept for using agents on the Lightning Network called the LangChain Bitcoin Suite. The product is particularly interesting because it aims to address a growing problem in the Web 2 world: gated and expensive API access for web applications.

LangChain addresses this by giving developers a toolset that lets agents buy, sell, and hold Bitcoin, query API keys, and send micropayments. Micropayments are effectively impractical on traditional payment rails because of their cost, whereas on the Lightning Network agents can send unlimited micropayments every day for minimal fees. Combined with LangChain's L402 payment-metered API framework, this lets companies price API access according to actual usage rather than setting a single, prohibitively expensive flat rate.
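
The Python toy below simulates the shape of such a metered flow, not the real L402 or Lightning implementation: the first request is refused with a payment challenge, the agent "pays" the invoice and learns the preimage, and a retry presenting that preimage is served. The server class, the invoice format, the pay_invoice helper, and the budget accounting are all hypothetical simplifications.

```python
import hashlib
import secrets
from dataclasses import dataclass, field

@dataclass
class ApiServer:
    """Toy API server metering access per call with an L402-style challenge."""
    price_sats: int = 10
    invoices: dict[str, str] = field(default_factory=dict)   # payment_hash -> preimage

    def request(self, auth: tuple[str, str] | None = None):
        if auth is None:
            # 402 Payment Required: hand back an invoice tied to a payment hash.
            preimage = secrets.token_hex(32)
            payment_hash = hashlib.sha256(bytes.fromhex(preimage)).hexdigest()
            self.invoices[payment_hash] = preimage
            return 402, {"invoice": payment_hash, "amount_sats": self.price_sats}
        payment_hash, preimage = auth
        # Access is granted iff the preimage hashes to the invoice's payment hash,
        # which in the real protocol proves the Lightning payment settled.
        if hashlib.sha256(bytes.fromhex(preimage)).hexdigest() == payment_hash:
            return 200, {"data": "requested resource"}
        return 401, {"error": "invalid proof of payment"}

def pay_invoice(server: ApiServer, payment_hash: str) -> str:
    """Stand-in for paying over Lightning and learning the preimage in return."""
    return server.invoices[payment_hash]

if __name__ == "__main__":
    server, agent_budget_sats = ApiServer(), 100
    status, body = server.request()                     # first call: 402 + invoice
    preimage = pay_invoice(server, body["invoice"])     # agent pays a few sats
    agent_budget_sats -= body["amount_sats"]
    status, body = server.request((body["invoice"], preimage))
    print(status, body, "| budget left:", agent_budget_sats)
```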

If, in the future, on-chain activity comes to be dominated by agents interacting with agents, capabilities like these will be necessary to ensure agents can transact with one another cost-effectively. This is an early example of how agents can operate on permissionless, cost-effective payment rails, opening up possibilities for new markets and new economic interactions.

Outlook

The agent field is still in its early stages.

Projects are only just beginning to roll out functional agents that can handle simple tasks, and access is generally limited to experienced developers and users.

However, over time, AI agents will have one of the biggest impacts on the crypto field, improving all aspects from infrastructure development to user experience and usability in various vertical domains. Transactions will start shifting from click-based to text-based, and users will be able to interact with on-chain agents through large language models (LLMs). Teams like Dawn Wallet have already launched chatbot wallets for users to interact on-chain.

Furthermore, it is not yet clear how agents could operate on Web 2.0 rails, since financial channels rely on regulated banking institutions that cannot operate 24/7 or conduct seamless cross-border transactions. As Lyn Alden has emphasized, crypto rails are particularly attractive for agents compared to credit cards because payments are final (not refundable) and can be arbitrarily small. However, if agents become more widespread, existing payment providers and applications may move quickly to build the infrastructure needed to operate on existing financial rails, blunting some of the benefits of using crypto.

For now, agents may be limited to deterministic cryptocurrency transactions, where a given input guarantees a given output. Further work is needed on how these agents can leverage models to execute complex tasks and on expanding the range of tasks they can perform. For crypto agents to become useful beyond novel on-chain use cases, broader integration and acceptance of crypto as a form of payment, as well as regulatory clarity, will be required. As these components mature, however, agents are poised to become among the biggest consumers of the decentralized computing and zkML solutions discussed above, autonomously receiving and solving any task in a non-deterministic way.

Conclusion

AI brings the same innovations to the crypto field that we have already seen in Web2, enhancing all aspects from infrastructure development to user experience and usability. However, projects are still in the early stages, and in the short term, the integration of crypto and AI will be mainly dominated by off-chain integrations.

Products like Copilot promise to "boost developer efficiency by 10x," and Layer 1s and DeFi applications, some in partnership with major companies like Microsoft, are already rolling out AI-assisted development platforms. Companies like Cub3.ai and Test Machine are developing AI for smart contract auditing and real-time threat monitoring to improve on-chain security. And LLM chatbots are being trained on on-chain data, protocol documentation, and applications to give users better usability and UX.

For more advanced integration, fully utilizing the underlying technology of crypto, the challenge lies in proving that on-chain AI solutions are technically feasible and economically viable at scale. The development of decentralized computing, zkML, and AI agents all point to promising vertical domains, laying the foundation for a future deeply intertwined with crypto and AI.
