Author | Mohamed Fouda, AllianceDAO
Translation | Cecilia, bfrenz DAO
Since the launch of ChatGPT and GPT-4, there has been a constant stream of content discussing how AI is completely changing everything, including Web3. Developers across industries have reported productivity gains ranging from 50% to 500% from leveraging ChatGPT. As a developer assistant, for example, it can generate boilerplate code, write unit tests, create documentation, debug, and detect vulnerabilities.
While this article explores the new and interesting use cases AI enables in Web3, the more important story is the mutually beneficial relationship between Web3 and AI. Few technologies have the potential to significantly influence the trajectory of AI development, and Web3 is one of them.
Specific Scenarios of AI Empowering Web3
1. Blockchain Games
Generating bots for non-programming game players
Blockchain-based games like Dark Forest have created a unique gameplay loop in which players gain an advantage by developing and deploying "bots" to perform game tasks. However, this style of play can make non-programming players feel excluded. Large language models (LLMs) may change this: an LLM that understands a blockchain game's rules can use them to create bots that reflect a player's strategy, without the player writing any code. Projects like Primodium and AI Arena are attempting to let human players and AI players participate in games together without writing complex code.
Using bots for battles or betting
Another possibility for on-chain games is fully automated AI players. Here the player itself is an AI agent, such as AutoGPT, that uses a large language model (LLM) as a backend and has access to the internet and possibly some initial cryptocurrency funds. These AI players could compete in betting games reminiscent of robot wars, creating a market for speculating and betting on the results. Such a market could give rise to a completely new gaming experience that is both strategic and appealing to a wider range of players, regardless of their programming proficiency.
Creating realistic NPC environments for on-chain games
Currently, many games have relatively limited and predictable NPC behavior with little impact on gameplay. By combining AI and Web3, we could create more engaging NPC environments, breaking that predictability and making games more interesting. One key prerequisite is the introduction of more compelling AI-controlled NPCs.
However, realistic NPC environments face a practical challenge: introducing meaningful NPC dynamics while minimizing the transaction throughput (TPS) those activities require. If NPC activity demands high TPS, it can congest the network and degrade the experience of real players.
Through these new gameplay and possibilities, blockchain games are evolving towards greater diversity and inclusivity, allowing more types of players to participate and experience the fun of the game together.
2. Decentralized Social Media
Currently, decentralized social (DeSo) platforms struggle to offer a user experience that distinguishes them from existing centralized platforms. Seamless AI integration, however, could give these Web2 alternatives unique experiences. For example, AI-managed accounts could attract new users to the network by sharing relevant content, commenting on posts, and participating in discussions. AI accounts could also aggregate the latest trends matching a user's interests. This kind of AI integration would bring more innovation to decentralized social media platforms and attract more users.
3. Security and Economic Design Testing of Decentralized Protocols
LLM-based AI agents give us the opportunity to stress-test the security and economic robustness of decentralized networks in practice. These agents can define goals, write code, and execute it, providing a new perspective for evaluating the security and economic design of decentralized protocols. In such a test, the AI agent is instructed to exploit the protocol's security and economic design. It first reviews the protocol documentation and smart contracts to identify potential weaknesses, then independently attempts attacks on the protocol's mechanisms to maximize its own profit. This approach simulates the adversarial environment the protocol will face after launch. Based on the results, protocol designers can revisit the design and fix the weaknesses found. Today, only specialized firms like Gauntlet have the technical skills to provide such services for decentralized protocols. However, with LLMs trained on Solidity, DeFi mechanisms, and past exploits, we expect AI agents to offer similar functionality.
4. LLM for Data Indexing and Metric Extraction
Although blockchain data is public, indexing it and extracting useful insights has always been a challenge. Some players (such as CoinMetrics) specialize in indexing data and building sophisticated metrics for sale, while others (such as Dune) focus on indexing the primary components of raw transactions and extracting partial metrics through community contributions. Recent advances in LLMs suggest that data indexing and metric extraction may undergo a revolution. The blockchain data company Dune has recognized this potential threat and announced an LLM roadmap that includes components such as SQL query explanation and NLP-based queries. However, we expect the impact of LLMs to be more far-reaching. One possibility is LLM-based indexing, where an LLM interacts directly with blockchain nodes to index data for specific metrics. Startups like Dune Ninja are already exploring innovative LLM-based data indexing applications.
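Independent of any LLM, the kind of metric such an indexer would compute can be sketched in a few lines. The transaction format below is hypothetical, chosen only to illustrate turning raw transactions into a daily-active-addresses metric:

```python
from collections import defaultdict
from datetime import datetime, timezone

def daily_active_addresses(transactions):
    """Count unique sending/receiving addresses per UTC day.

    `transactions` is a list of dicts with hypothetical fields:
    {"from": str, "to": str, "timestamp": int (unix seconds)}.
    """
    active = defaultdict(set)
    for tx in transactions:
        day = datetime.fromtimestamp(tx["timestamp"], tz=timezone.utc).date().isoformat()
        active[day].add(tx["from"])
        active[day].add(tx["to"])
    # One row per day, in chronological order
    return {day: len(addrs) for day, addrs in sorted(active.items())}

txs = [
    {"from": "0xa", "to": "0xb", "timestamp": 1700000000},
    {"from": "0xb", "to": "0xc", "timestamp": 1700003600},
    {"from": "0xa", "to": "0xc", "timestamp": 1700100000},
]
print(daily_active_addresses(txs))
```

An LLM-based indexer would generate and run code of roughly this shape from a natural-language request such as "daily active addresses last week."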
5. Developers Joining New Ecosystems
Different blockchain networks compete to attract developers to build applications, and Web3 developer activity is a key indicator of an ecosystem's success. However, developers often struggle to find support when they start learning and building in a new ecosystem. Ecosystems have invested millions of dollars in dedicated developer-relations (DevRel) teams to help developers explore the ecosystem. Here, LLMs have shown remarkable capability: they can explain complex code, catch errors, and even create documentation. A fine-tuned LLM can complement human expertise and significantly improve developer productivity. For example, an LLM can be used to create documentation and tutorials, answer common questions, and even provide boilerplate code or unit tests for developers during hackathons. All of this encourages active developer participation and drives the growth of the ecosystem.
6. Improving DeFi Protocols
Integrating AI into protocol logic could significantly improve the performance of many DeFi protocols. To date, the main obstacle to applying AI in DeFi has been the high cost of putting AI on-chain. AI models can be executed off-chain instead, but until recently there was no way to verify that execution. Projects like Modulus and ChainML are now making verifiable off-chain execution a reality: they allow machine learning models to run off-chain while limiting on-chain costs. With Modulus, the on-chain cost is only that of verifying a zero-knowledge proof (ZKP) of the model's execution. With ChainML, the on-chain cost is an oracle fee paid to a decentralized AI execution network.
Here are some potential use cases of DeFi that could benefit from AI integration:
AMM liquidity allocation: For example, updating the liquidity range of Uniswap V3. By integrating artificial intelligence, the protocol can intelligently adjust the liquidity range to improve the efficiency and returns of AMMs (Automated Market Makers).
Liquidation protection and debt positions: By combining on-chain and off-chain data, more effective liquidation protection strategies can be implemented to protect debt positions from market fluctuations.
Complex DeFi structured products: When designing treasury mechanisms, financial AI models can be relied upon instead of fixed strategies. Such strategies may include trades, loans, or options managed by AI, thereby increasing the intelligence and flexibility of the products.
Advanced on-chain credit scoring mechanisms: Considering different wallets on different blockchains, integrating artificial intelligence can help build a more accurate and comprehensive credit scoring system to better assess risks and opportunities.
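As a sketch of the first use case, an AI volatility forecast could drive a concentrated-liquidity range. The range rule below is our own simplification, not Uniswap's actual mechanics, and `sigma_daily` stands in for an ML model's output:

```python
import math

def liquidity_range(current_price, sigma_daily, horizon_days=7, coverage=2.0):
    """Sketch: center a concentrated-liquidity range on the current price,
    wide enough to cover `coverage` standard deviations of predicted
    log-price movement over the horizon. In practice `sigma_daily` would
    come from an ML volatility forecast; here it is a placeholder input.
    """
    sigma_h = sigma_daily * math.sqrt(horizon_days)  # volatility scales with sqrt(time)
    lower = current_price * math.exp(-coverage * sigma_h)
    upper = current_price * math.exp(coverage * sigma_h)
    return lower, upper

lo, hi = liquidity_range(current_price=2000.0, sigma_daily=0.03)
print(round(lo, 2), round(hi, 2))
```

A calmer forecast narrows the range (more fee income per unit of liquidity); a volatile forecast widens it (less risk of the price exiting the range).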
By leveraging these AI integration use cases, the DeFi field can better adapt to changing market demands, improve efficiency, reduce risks, and create more value for users. Meanwhile, with the continuous development of off-chain verification technology, the prospects for AI applications in DeFi will further expand.
Web3 Technology Can Help Enhance the Capabilities of AI Models
Although existing AI models have shown tremendous potential, they still face challenges in data privacy, fairness in specific model execution, and the creation and dissemination of false content. In these areas, the unique advantages of Web3 technology may play an important role.
1. Creating Proprietary Datasets for ML Training
One area where Web3 can assist AI is the collaborative creation of proprietary datasets for machine learning (ML) training, for example through Proof of Private Work (PoPW) networks for dataset creation. Massive datasets are crucial for accurate ML models, but obtaining and creating them can be a bottleneck, especially in use cases that require private data. Using ML for medical diagnosis, for instance, requires training on medical records, and privacy concerns around patient data pose significant obstacles: patients may be unwilling to share their records. To address this, patients could verifiably anonymize their medical records, protecting their privacy while still allowing the records to be used for ML training.
However, the authenticity of anonymized data may be a concern, as false data could greatly affect model performance. In this case, zero-knowledge proofs (ZKP) can be used to verify the authenticity of anonymized data. Patients can generate ZKP to prove that the anonymized records are indeed copies of the original records, even after removing personally identifiable information (PII). This approach both protects privacy and ensures the credibility of the data.
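Generating real ZKPs is beyond a short example, but the data flow can be sketched with a plain hash commitment standing in for the proof. The record fields and salt handling are illustrative; a real system would prove in zero knowledge that the anonymized copy equals the committed record minus PII, without ever revealing the original:

```python
import hashlib
import json

def commit(record, salt):
    """Hash commitment to a record (a stand-in for a ZK-friendly commitment)."""
    data = json.dumps(record, sort_keys=True).encode() + salt
    return hashlib.sha256(data).hexdigest()

def anonymize(record, pii_fields):
    """Drop personally identifiable fields from a medical record."""
    return {k: v for k, v in record.items() if k not in pii_fields}

# Patient side: commit to the original record, then share only the
# anonymized copy plus the commitment. A real ZKP would prove, without
# opening the commitment, that the shared copy is the committed record
# with PII removed; here we only illustrate the data flow.
original = {"name": "Alice", "dob": "1990-01-01", "diagnosis": "flu", "hr": 72}
salt = b"random-salt"
c = commit(original, salt)
shared = anonymize(original, pii_fields={"name", "dob"})
print(shared, c[:16])
```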
2. Running Inference on Private Data
Currently, there is a significant issue with large language models (LLMs) regarding how to handle private data. For example, when users interact with ChatGPT, OpenAI collects their private data and uses it for model training, leading to potential leaks of sensitive information. Recent cases have highlighted accidental leaks of classified data by employees using ChatGPT for office assistance, further emphasizing this issue. Zero-knowledge (ZK) technology holds promise in addressing the challenges that arise when machine learning models handle private data. Here, we will explore two scenarios: open-source models and proprietary models.
For open-source models, users can download the model and run it locally on their private data. For example, Worldcoin's World ID upgrade plan ("ZKML") requires processing users' biometric data, such as iris scans, to create a unique identifier (IrisCode) for each user. Here, users can download the machine learning model that generates the IrisCode and run it locally, keeping their biometric data private. By creating a zero-knowledge proof (ZKP), a user can prove they correctly generated their IrisCode, ensuring the authenticity of the inference while protecting data privacy. Efficient ZK proving systems (such as those developed by Modulus Labs) play a crucial role in making such proofs of ML inference practical.
In the case of proprietary models used for inference, the situation is slightly more complex because running inference locally may not be an option. However, zero-knowledge proofs can help address the issue in two possible ways. The first approach involves anonymizing user data using ZKP before sending the anonymized data to the machine learning model, as discussed earlier in the dataset creation scenario. The second approach involves locally preprocessing private data before sending the preprocessed output to the machine learning model. In this case, the preprocessing step obscures the user's private data, making it unreconstructable. Users can generate ZKPs to prove the correct execution of the preprocessing step, while the other proprietary parts of the model can be remotely executed on the model owner's server. These use cases may include AI doctors capable of analyzing patient medical records for potential diagnoses and algorithms assessing clients' private financial information for financial risk assessment.
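The preprocessing step in the second approach can be sketched as computing coarse, non-invertible features from a private record. The fields and bucketing rules below are hypothetical, chosen to show that only obscured features leave the user's device:

```python
def preprocess(financial_record):
    """Locally reduce a private financial record (hypothetical fields) to
    coarse features that cannot be inverted back to the raw values.
    Only this output would be sent to the remote proprietary model, and a
    ZKP could attest that the preprocessing was executed correctly.
    """
    income = financial_record["annual_income"]
    debt = financial_record["total_debt"]
    return {
        "income_bucket": min(income // 25_000, 10),        # coarse bucket, not exact income
        "debt_to_income": round(debt / max(income, 1), 1),  # ratio to one decimal place
        "has_defaults": financial_record["defaults"] > 0,   # boolean only
    }

features = preprocess({"annual_income": 83_000, "total_debt": 24_900, "defaults": 0})
print(features)
```

Many distinct raw records map to the same feature set, so the server learns the features but cannot reconstruct the underlying record.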
Through ZK technology, Web3 can provide enhanced data privacy protection, making AI more secure and reliable when handling private data, while also offering new possibilities for AI applications in privacy-sensitive domains.
3. Ensuring Content Authenticity and Combating Deepfake Scams
The emergence of ChatGPT may have overshadowed the existence of generative AI models focused on creating images, audio, and videos. However, these models are currently capable of generating realistic deepfake content. For example, AI-generated portrait photos and AI-generated versions of new songs by artists like Drake have been widely circulated on social media. Given people's innate tendency to believe what they see and hear, these deepfake contents may pose potential scam risks. While some startups attempt to address this issue using Web2 technology, digital signatures and other Web3 technologies may be more effective in combating this problem.
In Web3, transactions are signed with users' private keys to prove their validity. Similarly, text, images, audio, and video can be digitally signed with a creator's private key to prove authenticity. Anyone can verify the signature using the creator's public address, which can be found on the creator's website or social media accounts. Web3 networks already have all the infrastructure needed for content verification. Some investors have linked their social media profiles, such as Twitter, or decentralized social platforms such as Lens Protocol and Mirror, to cryptographic public addresses to increase the credibility of content verification. For example, Fred Wilson, a partner at the leading investment firm USV, has discussed how associating content with public cryptographic keys could be effective in combating misinformation.
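The sign-then-verify flow can be sketched with a toy Schnorr signature in pure Python. The group parameters are far too small to be secure and the content string is invented; real wallets sign with curves such as secp256k1 or Ed25519:

```python
import hashlib
import secrets

# Toy Schnorr signature over a tiny prime-order group -- illustration only.
p, q, g = 467, 233, 4  # p = 2q + 1; g generates the order-q subgroup

def H(*parts):
    """Hash arbitrary parts to an integer challenge."""
    data = "|".join(map(str, parts)).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big")

def keygen():
    x = secrets.randbelow(q - 1) + 1  # private signing key
    return x, pow(g, x, p)            # (private key, public key)

def sign(x, message):
    k = secrets.randbelow(q - 1) + 1  # fresh per-signature nonce
    r = pow(g, k, p)
    e = H(r, message)
    s = (k + x * e) % q
    return e, s

def verify(y, message, sig):
    e, s = sig
    r = (pow(g, s, p) * pow(y, -e, p)) % p  # recovers g^k iff the signature is valid
    return H(r, message) == e

x, y = keygen()
content = "This track was released by me on 2023-05-01"
sig = sign(x, content)
print(verify(y, content, sig), verify(y, "tampered claim", sig))
```

In practice the public key `y` would be the creator's published on-chain address, so anyone can check that a piece of content really came from that creator.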
While the concept may seem simple, much work remains to improve the user experience of the verification process. For example, the signing process needs to be automated to give creators a seamless experience. Another challenge is how to generate subsets of signed data, such as audio or video clips, without re-signing. Many projects are working on these issues, and Web3 is uniquely positioned to solve them. Through technologies like digital signatures, Web3 can play a crucial role in protecting content authenticity and combating deepfakes, increasing user trust and the credibility of the online environment.
4. Minimizing Trust in Proprietary Models
Web3 technology can also minimize the trust required in service providers when proprietary machine learning (ML) models are offered as a service. Users may want to verify that they received the service they paid for, or obtain assurances that the ML model is executed fairly, i.e., that the same model is used for all users. Zero-knowledge proofs (ZKPs) can provide these assurances. In this architecture, the creator of the ML model builds a ZK circuit representing the model. When needed, the circuit is used to generate a zero-knowledge proof of a user's inference. The proof can be sent to the user for verification or published on a public chain that handles verification on users' behalf. If the ML model is proprietary, an independent third party can verify that the ZK circuit indeed represents the model. This trust minimization is particularly valuable when the model's output carries high stakes. Here are some specific use cases:
Machine Learning Applications for Medical Diagnostics
In this scenario, patients submit their medical data to an ML model for potential diagnosis. Patients need to ensure that the target ML model does not misuse their data. The inference process can generate a zero-knowledge proof to prove the correct execution of the ML model.
Loan Credit Assessment
ZKPs can ensure that banks and financial institutions consider all financial information submitted by applicants when assessing creditworthiness. Additionally, by proving that all users use the same model, ZKPs can demonstrate fairness.
Insurance Claims Processing
Current insurance claims processing is manual and subjective. However, ML models can assess insurance policies and claim details more fairly. Combined with ZKPs, these claims processing ML models can be proven to consider all policy and claim details, and the same model is used to process all claims under the same policy.
By leveraging technologies like zero-knowledge proofs, Web3 is poised to provide innovative solutions to the trust issues of proprietary ML models. This not only helps increase user trust in model execution but also promotes fairer and more transparent transaction processes.
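A full ZK circuit is beyond a short example, but the fairness guarantee (the same model for every user) can be sketched with a plain hash commitment standing in for the ZK machinery. The toy linear "credit model" and all names here are illustrative:

```python
import hashlib
import json

def model_commitment(weights):
    """Public commitment to the model parameters. In a real ZKML system the
    proof would show the inference used exactly these committed weights;
    here we only check that the commitment matches."""
    return hashlib.sha256(json.dumps(weights).encode()).hexdigest()

def score(weights, features):
    """Toy linear scoring function -- a stand-in for a proprietary ML model."""
    return sum(w * f for w, f in zip(weights, features))

weights = [0.5, -1.2, 2.0]
published = model_commitment(weights)  # published once, e.g. on-chain

def serve(weights, features):
    """Serving side: every response carries the commitment of the weights used."""
    return {"score": score(weights, features), "commitment": model_commitment(weights)}

resp = serve(weights, [1.0, 0.5, 0.25])
# Any user can check their response was produced by the committed model
print(resp["commitment"] == published, round(resp["score"], 2))
```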
5. Addressing Centralization in Model Creation
Creating and training LLMs (large language models) is a time-consuming and expensive process, requiring specialized knowledge in specific domains, dedicated computing infrastructure, and millions of dollars in computing costs. These characteristics may lead to powerful centralized entities, such as OpenAI, which can exercise significant power over their users by controlling access to their models.
Given these centralization risks, important discussions are underway about how Web3 can promote decentralized model creation. Some Web3 advocates propose decentralized computing as a way to compete with centralized players, arguing that it can be a cheaper alternative. Our view is that this may not be the strongest angle of competition: the communication overhead between heterogeneous computing devices can make ML training 10 to 100 times slower than on dedicated infrastructure. The trade-offs of decentralized computing therefore need careful weighing before it can address centralization in model creation.
Another approach is to create unique, competitive ML models through Proof of Private Work (PoPW) networks. The advantage of this approach is that it decentralizes model creation by distributing datasets and computation across the network's nodes, which contribute to training while keeping their data private. Projects like Together and Bittensor are moving in this direction.
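The distribute-and-aggregate pattern behind such networks can be sketched with federated averaging, where each node takes a gradient step on its private data and shares only the updated parameter. This is a minimal single-weight illustration, not any specific project's protocol:

```python
def local_update(w, data, lr=0.1):
    """One gradient step of 1-D linear regression y ~ w*x on a node's
    private data; only the updated weight leaves the node, never the data."""
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def federated_round(global_w, node_datasets):
    """Each node updates locally; the coordinator averages the results."""
    updates = [local_update(global_w, d) for d in node_datasets]
    return sum(updates) / len(updates)

# Private datasets held by three nodes, all drawn from y = 3x.
nodes = [[(1.0, 3.0), (2.0, 6.0)], [(0.5, 1.5)], [(3.0, 9.0)]]
w = 0.0
for _ in range(50):
    w = federated_round(w, nodes)
print(round(w, 3))
```

The averaged weight converges to the true slope even though no node ever reveals its raw data points; a PoPW network would additionally prove that each node's contribution was computed honestly.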
Payment and Execution Rails for AI Agents
The payment and execution rails for AI agents have drawn significant attention in recent weeks. Using LLMs to perform specific tasks and achieve goals is a rising trend, beginning with BabyAGI and quickly spreading to more advanced versions like AutoGPT. It is reasonable to predict that AI agents will become more capable and more specialized in particular tasks. If a dedicated market emerges, AI agents will be able to search for, hire, and pay other AI agents for their services, collaborating on larger projects.
Web3 provides an ideal environment for such AI agents, particularly for payments: agents can be configured with cryptocurrency wallets to receive payments and pay other agents, enabling division of labor and collaboration. Agents can also permissionlessly acquire resources by plugging into crypto networks. For example, an AI agent that needs to store data can create a Filecoin wallet and pay for storage on a decentralized storage network such as IPFS. Likewise, agents can rent computing resources from decentralized compute networks such as Akash to perform specific tasks or scale up their execution.
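The hire-and-pay loop can be sketched with an in-memory ledger standing in for an on-chain token. The agent names, prices, and the `Ledger` class are all hypothetical:

```python
class Ledger:
    """In-memory stand-in for an on-chain token ledger."""
    def __init__(self, balances):
        self.balances = dict(balances)

    def transfer(self, src, dst, amount):
        if self.balances.get(src, 0) < amount:
            raise ValueError("insufficient funds")
        self.balances[src] -= amount
        self.balances[dst] = self.balances.get(dst, 0) + amount

class Agent:
    def __init__(self, name, ledger, price):
        self.name, self.ledger, self.price = name, ledger, price

    def hire(self, worker, task):
        """Pay another agent for a task, then receive its result."""
        self.ledger.transfer(self.name, worker.name, worker.price)
        return worker.perform(task)

    def perform(self, task):
        return f"{self.name} completed: {task}"

ledger = Ledger({"planner": 100, "storage-agent": 0})
planner = Agent("planner", ledger, price=0)
storage = Agent("storage-agent", ledger, price=10)
result = planner.hire(storage, "pin dataset")
print(result, ledger.balances)
```

On a real network the transfer would be a signed on-chain transaction, so neither agent needs to trust the other or any intermediary.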
Preventing AI Privacy Violations
In this development, however, privacy and data protection become especially important. Given that high-performing ML models require large amounts of training data, it is safe to assume that any public data will end up in ML models, which can use it to predict individual behavior. In finance in particular, such models can invade users' financial privacy. Privacy-preserving technologies, such as Zcash and Aztec for payments and private DeFi protocols like Penumbra and Aleo, can protect user privacy in transactions and data analysis, balancing financial activity against ML model development.
Conclusion
We believe that Web3 and AI are culturally and technically compatible. Unlike the resistance to bots in Web2, Web3, with its permissionless, programmable nature, creates room for artificial intelligence to flourish.
From a broader perspective, if we think of a blockchain as a network, AI is poised to play a dominant role at the edge of that network. This applies to all kinds of consumer applications, from social media to games. So far, the edge of the Web3 network has consisted mostly of humans: humans initiate and sign transactions, or act through bots executing pre-set strategies on their behalf.
Over time, we can foresee an increasing number of AI assistants at the network edge. These AI assistants will interact with humans and each other through smart contracts. We believe that these interactions will bring about new consumer and user experiences, potentially sparking innovative application scenarios.
The permissionless nature of Web3 gives AI greater freedom to integrate more closely with blockchain and distributed networks. This is expected to promote innovation, expand application areas, and create more personalized and intelligent experiences for users. However, close attention needs to be paid to privacy, security, and ethical issues to ensure that the development of AI does not have a negative impact on users, but truly achieves the harmonious coexistence of technology and culture.