The next major opportunity in crypto and AI is not another speculative token, but the infrastructure that can truly drive AI's development.
Author: Evan ⨀
Translated by: Deep Tide TechFlow
The intersection of crypto and AI is still at a very early stage. Although countless agents and tokens have flooded the market, most projects feel like speculative games, with teams taking as many shots as possible in the hope that one lands.
While AI is the technological revolution of our generation, its combination with crypto has so far served mostly as a liquidity vehicle for gaining early exposure to the AI market.
As a result, this intersection has been through multiple cycles, most of whose narratives rose and fell like a roller coaster.
How to break the hype cycle?
So, where does the next major opportunity in crypto and AI come from? What kinds of applications or infrastructure can truly create value and find market fit?
This article will attempt to explore the main points of interest in this field through the following framework:
How can AI help the crypto industry?
How can crypto feed back into AI?
I am particularly interested in the second point, the opportunities in decentralized AI, and will introduce some exciting projects in that area.
1. How can AI assist the crypto industry?
Here is a fairly comprehensive market map from Coinbase Ventures (CV):
https://x.com/cbventures/status/1923401975766355982/photo/1
Although many verticals still span consumer AI, agent frameworks, and launchpads, AI has already reshaped the crypto experience in three main areas:
1. Developer Tools
Much like in Web2, AI is accelerating the development of crypto projects through no-code and low-code platforms. Many of these applications pursue the same goals as counterparts in traditional fields, such as Lovable.dev.
Teams like @poofnew and @tryoharaAI are helping non-technical builders launch and iterate quickly without deep smart contract expertise. This not only shortens time to market for crypto projects but also lowers the barrier to entry for people with market insight and creative ideas, even without a technical background.
Additionally, other parts of the developer experience have also been optimized, such as smart contract testing and security: @AIWayfinder, @octane_security
2. User Experience
Although crypto has made significant progress on onboarding and wallet experiences (e.g., Bridge, Sphere Pay, Turnkey, Privy), the core crypto user experience (UX) has not fundamentally changed. Users still have to manually navigate complex block explorers and execute multi-step transactions.
AI agents are changing this status quo by becoming a new interaction layer:
Search and Discovery: Teams are racing to develop tools similar to "blockchain version of Perplexity." These chat-based natural language interfaces allow users to easily find market information (alpha), understand smart contracts, and analyze on-chain behavior without delving into raw transaction data.
A greater opportunity lies in the fact that intelligent agents can become the entry point for users to discover new projects, yield opportunities, and tokens. Similar to how Kaito helps projects gain more attention on its launch platform, agents can understand user behavior and proactively present the content users need. This not only creates sustainable business models but may also achieve profitability through revenue sharing or affiliate fees.
Intent-based Operations: Users do not need to click through multiple interfaces; they simply express an intent (e.g., "convert $1000 of ETH into the highest-yielding stablecoin position"), and the agent automatically executes the complex multi-step transaction (a toy version of this planning step is sketched after this list).
Error Prevention: AI can also prevent common mistakes, such as entering incorrect transaction amounts, purchasing scam tokens, or approving malicious contracts.
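To make the intent flow concrete, here is a minimal sketch of how an agent might turn the stablecoin example above into ordered on-chain steps. Every name in it (the pools, the `Step` type, the yield numbers) is a hypothetical stand-in rather than any real project's API:

```python
# A minimal sketch of an intent-based agent's planning step. All protocol
# names, rates, and helpers are hypothetical, not any real project's API.
from dataclasses import dataclass

@dataclass
class Step:
    action: str   # e.g. "swap" or "deposit"
    params: dict

def plan_intent(eth_amount_usd: float, yields: dict[str, float]) -> list[Step]:
    """Turn 'convert $X of ETH into the highest-yielding stablecoin
    position' into an ordered list of on-chain steps."""
    best_pool, best_apy = max(yields.items(), key=lambda kv: kv[1])
    return [
        Step("swap", {"from": "ETH", "to": "USDC", "amount_usd": eth_amount_usd}),
        Step("deposit", {"pool": best_pool, "asset": "USDC",
                         "amount_usd": eth_amount_usd, "apy": best_apy}),
    ]

# Hypothetical yield table the agent might have fetched from indexers.
yields = {"lend_pool_a": 0.062, "lend_pool_b": 0.048}
for step in plan_intent(1000, yields):
    print(step)   # a real agent would simulate, then sign and submit txs
```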
Hey Anon is one example of a team putting this kind of DeFAI automation into practice.
3. Trading Tools and DeFi Automation
Currently, many teams are racing to develop intelligent agents to help users obtain smarter trading signals, trade on behalf of users, or optimize and manage strategies.
Yield Optimization
Agents can automatically shift funds between lending protocols, decentralized exchanges (DEXs), and farming opportunities as interest rates and risk conditions change (a toy version of the rotation rule is sketched after this list).
Trade Execution
AI can execute strategies superior to manual trading by processing market data faster, managing emotions, and following preset frameworks.
Portfolio Management
Agents can rebalance portfolios, manage risk exposure, and capture arbitrage opportunities across different chains and protocols.
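As a minimal illustration of the yield-optimization logic above: rotate only when the expected extra yield over some horizon beats the switching cost (gas, slippage). The numbers and the 30-day horizon are illustrative assumptions:

```python
# A toy rebalancing rule for a yield-routing agent: move funds only when
# the APY improvement outweighs the cost of switching pools.
def should_rotate(current_apy: float, best_apy: float,
                  position_usd: float, switch_cost_usd: float,
                  horizon_days: float = 30) -> bool:
    # Expected extra yield over the horizon if we move to the better pool.
    extra_yield = position_usd * (best_apy - current_apy) * horizon_days / 365
    return extra_yield > switch_cost_usd

# $50k position, 4.5% -> 6.2% APY, $25 in gas/slippage to move:
print(should_rotate(0.045, 0.062, 50_000, 25))  # True: ~$70 extra vs $25 cost
```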
If an agent could truly and consistently manage funds better than humans, it would be a major upgrade over existing DeFi AI agents, which today mainly execute user-specified intents; this would be a move toward fully automated fund management. User acceptance, however, will likely follow an adoption curve like that of electric vehicles: a significant trust gap remains to be closed through large-scale validation. If it succeeds, this technology could capture the most value in the field.
Who are the winners in this field?
While some standalone applications may have an advantage in distribution, it is more likely that existing protocols will directly integrate AI technology:
DEXs (Decentralized Exchanges): Achieving smarter routing and fraud protection.
Lending Protocols: Automatically optimizing yield for a user's risk profile and repaying loans when the health factor falls below a set threshold, reducing liquidation risk (see the health-factor sketch after this list).
Wallets: Evolving into AI assistants that understand user intent.
Trading Platforms: Providing AI-assisted tools to help users stick to their trading strategies.
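A minimal sketch of that health-factor guard, using the standard definition (health factor = collateral value x liquidation threshold / debt). The 1.5 target and the dollar amounts are illustrative assumptions, not any specific protocol's parameters:

```python
# An agent watches a loan and repays part of the debt before it drifts
# into liquidation territory. Thresholds and sizing are illustrative.
def health_factor(collateral_usd: float, liq_threshold: float,
                  debt_usd: float) -> float:
    return collateral_usd * liq_threshold / debt_usd

def repay_to_target(collateral_usd: float, liq_threshold: float,
                    debt_usd: float, target_hf: float = 1.5) -> float:
    """Return how much debt to repay so the health factor recovers to target."""
    if health_factor(collateral_usd, liq_threshold, debt_usd) >= target_hf:
        return 0.0
    return debt_usd - collateral_usd * liq_threshold / target_hf

# Collateral worth $12,000 at an 80% liquidation threshold, $8,000 of debt:
print(health_factor(12_000, 0.80, 8_000))    # 1.2: getting risky
print(repay_to_target(12_000, 0.80, 8_000))  # 1600.0: repay to reach HF 1.5
```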
Endgame Outlook
Interfaces in crypto will evolve toward conversational AI that understands users' financial goals and executes on them more efficiently than users could themselves.
2. Crypto Assisting AI: The Future of Decentralized AI
In my view, the potential of crypto to assist AI far exceeds the impact of AI on crypto. Teams working on decentralized AI are exploring some of the most fundamental and practical questions about the future of AI:
Can cutting-edge models be developed without relying on large capital expenditures from centralized tech giants?
Is it possible to coordinate globally distributed computing resources to efficiently train models or generate data?
What happens if the most powerful technology of humanity is controlled by only a few companies?
I highly recommend reading @yb_effect 's article on decentralized AI (DeAI) for an in-depth understanding of this field.
Judging from even the tip of the iceberg, the next wave at the intersection of crypto and AI may come from research-first AI teams. These teams largely come out of the open-source AI community, hold a deep understanding of both the practical significance and the philosophical value of decentralized AI, and believe it is the best way to scale AI.
What are the current challenges facing AI?
In 2017, the landmark paper "Attention Is All You Need" introduced the Transformer architecture, resolving bottlenecks that had constrained deep learning for years. Since ChatGPT's breakout launch in late 2022, the Transformer has become the foundation of most large language models (LLMs) and has set off a race for computing power.
Since then, the computational power required for AI training has grown fourfold each year. This has led to a high degree of centralization in AI development, as pre-training relies on more powerful GPUs, which are predominantly controlled by the largest tech giants.
From an ideological perspective, centralized AI is problematic because humanity's most powerful tools can be controlled, or withdrawn, by their funders at any time. So even if open-source teams cannot directly match the pace of centralized labs, it is crucial that they contest this state of affairs.
Cryptography provides the economic coordination foundation for building open models. However, before achieving this goal, we need to answer a question: what practical problems can decentralized AI solve beyond fulfilling ideals? Why is it so important for people to collaborate?
Fortunately, the teams dedicated to this field are very pragmatic. Open source embodies a core idea about how technology scales: through small-scale collaboration, each team optimizes its own local maximum and builds on that foundation, ultimately reaching a global maximum faster than centralized approaches constrained by their own scale and institutional inertia.
At the same time, and especially in AI, open source is a necessary condition for creating intelligence that is not moralized by a single gatekeeper but can adapt to the different roles and personalities individuals assign to it.
In practical terms, open source may open the door to innovative solutions for some very real infrastructure limitations.
The Current Shortage of Computational Resources
Training AI models already requires vast energy infrastructure. Several projects are currently building data centers on the scale of 1 to 5 gigawatts. However, the continuous expansion of cutting-edge models will require more energy than a single data center can provide, potentially reaching levels comparable to the energy consumption of an entire city. The issue is not only energy output but also the physical limitations of a single data center.
Even beyond the pre-training phase of these frontier models, inference costs are set to rise sharply with the emergence of reasoning models such as DeepSeek's R1. As the team at @fortytwonetwork put it:
“Unlike traditional large language models (LLMs), reasoning models prioritize generating smarter responses by allocating more processing time. However, this shift comes with trade-offs: the same computational resources can handle fewer requests. To achieve these significant improvements, models need more 'thinking' time, which further exacerbates the scarcity of computational resources.
The shortage of computational resources has become very apparent. For example, OpenAI limits API calls to 10,000 per minute, which effectively restricts AI applications to serve only about 3,000 users simultaneously. Even ambitious projects like Stargate—a $500 billion AI infrastructure plan recently announced by President Trump—may only temporarily alleviate this issue.
According to Jevons’ Paradox, improvements in efficiency often lead to increased resource consumption as demand rises. As AI models become more powerful and efficient, computational demand may surge due to new use cases and broader adoption.”
So where does cryptography come in? How can blockchain meaningfully assist in the search and development of AI?
Crypto offers a fundamentally different approach: globally distributed, decentralized training with economic coordination. Rather than building new data centers, it can tap the millions of existing GPUs (gaming rigs, crypto mining hardware, enterprise servers) that sit mostly idle. Likewise, blockchains can enable decentralized inference by harnessing idle compute on consumer devices.
One major challenge facing distributed training is latency. Beyond the cryptographic elements, teams like Prime Intellect and Nous are researching technological breakthroughs to reduce GPU communication demands:
DiLoCo (Prime Intellect): Prime Intellect's implementation cuts communication requirements by 500x, making cross-continental training possible while sustaining 90-95% compute utilization (a toy sketch of the inner/outer loop follows this list).
DisTrO/DeMo (Nous Research): Nous Research's family of optimizers achieves an 857-fold reduction in communication demands through discrete cosine transform compression technology.
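Prime Intellect's and Nous's actual systems are far more sophisticated, but the core DiLoCo idea (take many local steps, then synchronize a "pseudo-gradient" once per round through an outer optimizer) can be sketched on a toy problem. Everything below, from the quadratic loss to the hyperparameters, is an illustrative assumption, not their implementation:

```python
# Toy DiLoCo-style inner/outer loop: workers take H local steps and
# communicate only once per round, cutting sync frequency by a factor of H.
import numpy as np

rng = np.random.default_rng(0)
dim, workers, H, rounds = 10, 4, 50, 20
target = rng.normal(size=dim)      # optimum of the toy loss ||w - target||^2
w_global = np.zeros(dim)
momentum = np.zeros(dim)
inner_lr, outer_lr, beta = 0.05, 0.7, 0.5

for _ in range(rounds):
    deltas = []
    for _ in range(workers):
        w = w_global.copy()
        for _ in range(H):         # H local steps, no communication
            grad = 2 * (w - target) + rng.normal(scale=0.1, size=dim)
            w -= inner_lr * grad
        deltas.append(w_global - w)            # "pseudo-gradient", sent once per round
    outer_grad = np.mean(deltas, axis=0)       # aggregate across workers
    momentum = beta * momentum + outer_grad    # outer optimizer with momentum
    w_global -= outer_lr * momentum

print("distance to optimum:", np.linalg.norm(w_global - target))
```

With H = 50, nodes synchronize 50x less often than step-synchronous data parallelism; the 500x figures reported for DiLoCo come from this kind of reduced synchronization at much larger scale.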
However, traditional coordination mechanisms cannot address the inherent trust challenges in decentralized AI training, while the inherent characteristics of blockchain may find product-market fit (PMF) here:
Verification and Fault Tolerance: Decentralized training must cope with participants submitting malicious or erroneous computations. Crypto supplies verification schemes (such as Prime Intellect's TOPLOC) and economic penalty mechanisms to deter bad behavior (an illustrative slashing scheme is sketched after this list).
Permissionless Participation: Unlike traditional distributed computing projects that require approval processes, cryptographic technology allows for true permissionless contributions. Anyone with idle computational resources can join immediately and start earning, maximizing the available resource pool.
Economic Incentive Alignment: Blockchain-based incentive mechanisms align the interests of individual GPU owners with collective training goals, making previously idle computational resources economically productive.
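As a rough illustration of how such economic penalties can work, here is a toy quorum scheme: a task is executed redundantly, the majority result is accepted, and dissenting nodes are slashed. This is a generic mechanism sketch, not TOPLOC (which relies on lightweight verification rather than full re-execution), and all stakes and node names are made up:

```python
# Hypothetical stake-and-slash settlement for contributed computations.
from collections import Counter

stakes = {"node_a": 100.0, "node_b": 100.0, "node_c": 100.0}

def settle(results: dict[str, str], reward: float, slash_frac: float = 0.5):
    """Majority result is accepted; dissenters lose part of their stake."""
    majority, _ = Counter(results.values()).most_common(1)[0]
    honest = [n for n, r in results.items() if r == majority]
    for node, result in results.items():
        if result == majority:
            stakes[node] += reward / len(honest)       # split the task reward
        else:
            stakes[node] -= slash_frac * stakes[node]  # economic penalty

settle({"node_a": "0xabc", "node_b": "0xabc", "node_c": "0xdef"}, reward=9.0)
print(stakes)  # a and b earn 4.5 each; c is slashed down to 50
```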
In light of this, how do teams in the decentralized AI stack address the scaling issues of AI and utilize blockchain? What are the proof points?
Prime Intellect: Distributed and Decentralized Training
DiLoCo: Reduced communication demands by 500 times, enabling cross-continental training.
PCCL: Handles dynamic member joining, node failures, and achieves cross-continental communication speeds of 45 Gbit/s.
Currently training a 32 billion parameter model through globally distributed work nodes.
Achieved 90-95% computational utilization in production environments.
Achievements: Successfully trained INTELLECT-1 (10 billion parameters) and INTELLECT-2 (32 billion parameters), enabling large-scale model training across continents.
Nous Research: Decentralized Training and Communication Optimization
DisTrO/DeMo: Achieved an 857-fold reduction in communication demands through discrete cosine transform technology.
Psyche Network: Utilizes blockchain coordination mechanisms to provide fault tolerance and incentive mechanisms to activate computational resources.
Completed one of the largest pre-trainings on the internet, training Consilience (40 billion parameters).
Pluralis: Protocol Learning and Model Parallelism
Pluralis takes a different approach from traditional open-source AI, called Protocol Learning. Unlike the data-parallel methods used by other decentralized training projects (such as Prime Intellect and Nous), Pluralis argues that data parallelism is economically flawed: merely pooling compute is not enough to train frontier models. For example, Llama 3 (405 billion parameters) required 16,000 80GB H100 GPUs to train.
Source: Link
The core idea of Protocol Learning is to give model trainers a real value-capture mechanism, and thereby pool the computational resources needed for large-scale training. It does this by allocating partial model ownership in proportion to training contributions. In this architecture, neural networks are trained collaboratively, but no single participant can ever extract the complete set of weights (hence the name Protocol Models). Any participant who wanted the full weights would face a computational cost exceeding that of retraining the model from scratch.
The specific operation of Protocol Learning is as follows:
Model Fragmentation: Each participant only holds a fragment (shard) of the model, rather than the complete weights.
Collaborative Training: The training process requires participants to exchange activations, but no one ever sees the complete model (a toy version of this is sketched after this list).
Inference Credentials: Inference requires credentials, which are allocated based on participants' training contributions. In this way, contributors can earn from the actual use of the model.
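To make the sharding idea concrete, here is a toy pipeline in which each participant holds one layer privately and only activations cross the wire. The shapes, the three-way split, and the class names are illustrative assumptions, not Pluralis's actual protocol:

```python
# Toy illustration of model sharding: no single party holds all weights;
# only activations move between nodes during the forward pass.
import numpy as np

rng = np.random.default_rng(1)

class Participant:
    """Holds one shard (layer) of the model; never shares its weights."""
    def __init__(self, d_in: int, d_out: int):
        self.W = rng.normal(scale=0.1, size=(d_in, d_out))  # private shard
    def forward(self, x: np.ndarray) -> np.ndarray:
        return np.tanh(x @ self.W)  # only this activation leaves the node

# A 3-participant "pipeline": input -> shard_0 -> shard_1 -> shard_2
pipeline = [Participant(8, 16), Participant(16, 16), Participant(16, 4)]
x = rng.normal(size=(1, 8))
for p in pipeline:      # each hop ships activations, never weights
    x = p.forward(x)
print(x)                # final output; no node ever saw the full weight set
```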
The significance of Protocol Learning lies in transforming models into an economic resource or commodity, allowing them to be fully financialized. In this way, Protocol Learning aims to achieve the computational scale necessary to support truly competitive training tasks. Pluralis combines the sustainability of closed-source development (such as the stable revenue from closed-source model releases) with the advantages of open-source collaboration, providing new possibilities for the development of decentralized AI.
Fortytwo: Decentralized Collective Inference
Source: Link
While other teams tackle the challenges of distributed and decentralized training, Fortytwo focuses on distributed inference, addressing the growing scarcity of inference compute through swarm intelligence.
To harness idle compute on consumer-grade hardware (such as MacBook Airs with M2 chips), Fortytwo connects specialized small language models into a network. These nodes evaluate one another's contributions peer to peer, and the final response is assembled from the most valuable contributions in the network, boosting inference efficiency (a toy version of this peer evaluation is sketched below).
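The peer-evaluation step can be sketched as a simple scoring round: each node's draft is scored by its peers, and the highest-scoring draft wins. The toy scorer below is a stand-in for whatever evaluation the real network runs:

```python
# Toy swarm inference: several small models draft answers, every node
# scores its peers, and the best-rated draft is returned.
import statistics

def swarm_answer(drafts: dict[str, str], score) -> str:
    """Each node's draft is scored by all other nodes; best draft wins."""
    peer_scores = {
        node: statistics.mean(
            score(judge, draft) for judge in drafts if judge != node
        )
        for node, draft in drafts.items()
    }
    return max(peer_scores, key=peer_scores.get)

# Hypothetical drafts from three SLM nodes and a toy scorer (longer = better).
drafts = {"slm_1": "short answer",
          "slm_2": "a fuller, sourced answer",
          "slm_3": "ok answer"}
best = swarm_answer(drafts, score=lambda judge, draft: len(draft))
print(best, "->", drafts[best])   # slm_2 wins under this toy scorer
```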
Interestingly, Fortytwo's inference network approach can complement distributed/decentralized training projects. Imagine a future scenario where small language models (SLMs) running on Fortytwo nodes are precisely the models trained by Prime Intellect, Nous, or Pluralis. These distributed training projects work together to create open-source foundation models, which are then fine-tuned for specific domains and ultimately complete inference tasks through Fortytwo's network coordination.
Conclusion
The next significant opportunity at the intersection of crypto and AI is not another speculative token, but the infrastructure that can truly drive AI's development. The scaling bottlenecks now facing centralized AI map directly onto crypto's core strengths: global resource coordination and economic incentive alignment.
Decentralized AI opens a parallel universe, one that not only expands the space of possible AI architectures but also pushes new technological frontiers where experimental freedom meets practical resources.