Gradients: The decentralized AI training infrastructure of the Bittensor ecosystem.


CoinW Research Institute

Abstract

Gradients is a decentralized AI training subnet built on Bittensor (SN56). Through mechanisms of task publishing, miner competition, and validator screening, it turns the complex technical process of model training into a market-driven, collaborative network process. Architecturally, it combines AutoML with distributed computing power to form a training market centered on incentives, which lowers the entry barrier for AI and raises computing-power utilization. In terms of ecosystem and data performance, Gradients has completed its basic network build-out, but its incentive weight and capital inflows remain limited for now. Gradients fills the training-infrastructure gap in the TAO ecosystem and explores a new paradigm of market-driven AI optimization; in the long term it could develop into an essential gateway for decentralized AI training.

1. Starting with Web2 AutoML: The Current Status and Limitations of AI Training

1.1 What is AutoML

In the traditional understanding, training an AI model is a high-barrier task: engineers must prepare data, select models, repeatedly tune parameters, and evaluate outcomes, making the whole process complex and time-consuming. AutoML (automated machine learning) essentially automates and packages these cumbersome steps. It can be understood as a tool for automating model creation: users only provide data and specify the goal they want, such as classification, prediction, or recognition, while the remaining steps, including model selection, hyperparameter tuning, and training, are completed automatically by the system. This turns AI from a tool available only to a few specialized engineers into a capability usable by ordinary developers and even businesses, an important step toward the popularization of AI.
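To make the "packaged automation" concrete, the sketch below shows the kind of search an AutoML system runs internally: given only data and an objective, it tries several model families and hyperparameter settings and keeps the best. The candidate models and grids here are illustrative choices for this example, not any specific platform's API.

```python
# A minimal sketch of what AutoML automates: the user supplies data and
# a goal ("classification"); the search over model families and
# hyperparameters happens without manual tuning.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

X, y = load_iris(return_X_y=True)  # the user's only real input

# Candidate models and hyperparameter grids (illustrative, not exhaustive).
search_space = [
    (LogisticRegression(max_iter=1000), {"C": [0.1, 1.0, 10.0]}),
    (RandomForestClassifier(), {"n_estimators": [50, 200], "max_depth": [3, None]}),
]

best_score, best_model = -1.0, None
for model, grid in search_space:
    search = GridSearchCV(model, grid, cv=5)  # automated tuning + evaluation
    search.fit(X, y)
    if search.best_score_ > best_score:
        best_score, best_model = search.best_score_, search.best_estimator_

print(f"selected: {best_model} (cv accuracy {best_score:.3f})")
```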

1.2 Core Limitations of Traditional AutoML

Currently, mainstream AutoML implementations are concentrated on cloud-vendor platforms such as Google Vertex AI and AWS SageMaker, which offer "AI training as a service." Although Web2 AutoML significantly lowers the entry barrier for AI, its underlying model still has clear limitations. The first is centralization: computing power, pricing, and rules are all controlled by the platform, creating strong dependence on a single service provider and leaving users with little bargaining power. Second, costs are high and opaque: the GPU resources that AI training relies on are concentrated in the hands of cloud vendors, and the pricing mechanism lacks market competition. More critically, optimization efficiency has a ceiling. Traditional AutoML is fundamentally still "one system finding the optimal solution for you"; however sophisticated that system is, it follows a single technical path of optimization, its search space is limited, and it struggles to try several entirely different approaches at once. Web2 AI training thus operates as a "closed system," where model training, optimization, and resource scheduling all happen inside an environment controlled by a single platform. This model is efficient, but its boundaries are becoming apparent as demand grows.

2. Gradients: Restructuring AI Training through "Networks"

2.1 What is Gradients: A Decentralized AutoML Platform

In the previous chapter, we noted that the core issue of traditional Web2 AutoML is its "closed system": model training depends on platforms, optimization paths are limited, and resource flows are constrained. Gradients is a direct reconstruction of this model. Initiated by WanderingWeights and grown out of a decentralized engineer community, it is built on the Bittensor network and operates as Subnet 56, an AI training subnet. Unlike traditional platforms, it does not provide a centralized service; instead, it breaks the training process down and entrusts it to an open network. Users only need to define the task goal, such as the model type and data, while the remaining steps, including training execution, parameter optimization, and result screening, are completed automatically by the network. In this model, AI training is abstracted from a complex engineering process into a simple cycle of "submit requirements, obtain results," bringing it closer to a universal capability rather than a technically demanding specialist task.
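As an illustration of that "submit requirements, obtain results" interface, a task submission might look something like the following. The endpoint, field names, and parameters are invented for this sketch; they are not Gradients' actual API.

```python
# Hypothetical task submission; every endpoint and field name below is
# an assumption made for illustration, not Gradients' real interface.
import requests

task = {
    "base_model": "llama-3-8b",                       # model type to train
    "dataset_url": "https://example.com/data.jsonl",  # the user's data
    "objective": "instruction-tuning",                # the stated goal
    "hours_budget": 8,                                # training window
}

resp = requests.post("https://api.example.com/v1/tasks", json=task, timeout=30)
resp.raise_for_status()
print("task id:", resp.json().get("task_id"))
# The network handles the rest: training runs, parameter optimization,
# and result screening happen among miners and validators.
```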

2.2 From Closed Systems to Open Collaboration: What Problems Does Gradients Solve?

The core change Gradients makes is to turn a training process that was closed inside a single platform into an open, collaborative network process. Training tasks are no longer completed by one system but distributed to multiple participants who attempt them in parallel, with the best results selected by a unified evaluation mechanism. This structure first reduces dependence on centralized service providers by building training on distributed computing power; at the same time, scattered GPU resources are integrated into the same network, where competition produces a resource-allocation model closer to market dynamics. More importantly, model optimization is no longer limited to a single path: parallel exploration of multiple methods continually approaches the optimal solution, raising the overall optimization ceiling.

2.3 Essential Change: From Tool to "Training Market"

In traditional AutoML, the platform functions more like a tool, utilizing internal algorithms to help users find optimal solutions. In Gradients, this process resembles a continuously operating "market": users publish demands, different participants compete around the same task, and results are filtered through an evaluation mechanism. Thus, model performance no longer depends on a single system's capability but derives from continuous competition and iteration among multiple participants. AutoML shifts from a relatively closed technical optimization problem to a dynamic process driven by incentives, enabling optimization capabilities to expand as the number of participants increases. This transformation gives AI training self-evolution characteristics similar to market dynamics.

2.4 Role in the TAO Ecosystem: AI Training Infrastructure Layer

In the subnet system of Bittensor, different Subnets undertake different functions such as inference, data processing, and training, with Gradients positioned as the training layer. It is responsible for converting distributed computing power into actual model outputs and ensures that these resources can be continuously scheduled and optimized through task distribution and evaluation mechanisms. Simultaneously, it connects computing power supply with model demand, transforming training from a mere resource consumption process into an organized and optimized network collaboration process. In this system, Gradients acts more like a central link, converting distributed resources into usable AI capabilities and supporting the development of upper-layer applications.

3. Core Architecture: How AI Training is Achieved in the Network

In the previous chapter, we mentioned that Gradients transforms AI training from being "completed within a platform" to being "completed through network collaboration." So, how does this network operate specifically? The core of this chapter is to break down this process more intuitively.

3.1 Distributed Training: How a Task is "Completed by Multiple People"

You can picture Gradients as a continuously running "training collaboration network." When a user submits a training task, it is not assigned to a single system but distributed simultaneously to multiple participants in the network. These participants attempt different training methods on the same data and goals and submit results within a specified time window. The system then evaluates the results uniformly: the best-performing solutions are rewarded, and the rest are eliminated. From the user's perspective, initiating a single task is equivalent to invoking many optimization strategies at once and automatically selecting the best outcome. The strength of this approach lies not in any individual node but in continually approaching the optimum through parallel attempts and automatic screening.

This network has three main types of participants: users, miners, and validators. Users propose training demands; miners provide computing power and try different training methods; validators evaluate the results and select the best models. This division of labor keeps the training process running continuously and grinding out better solutions, forming a collaborative network driven by demand, supply, and evaluation.
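The flow of one task through this demand-supply-evaluation loop can be sketched as follows, with miners simulated as local functions and a single scoring step standing in for validator evaluation. This is a schematic of the mechanism described above, not the subnet's actual protocol code; the strategy names are hypothetical.

```python
# Schematic: one task, many parallel attempts, automatic screening.
import random
from concurrent.futures import ThreadPoolExecutor

def miner(strategy: str, seed: int) -> dict:
    """Each miner tries a different training strategy on the same task."""
    rng = random.Random(seed)
    # rng.uniform stands in for held-out evaluation of the trained model.
    return {"strategy": strategy, "score": rng.uniform(0.6, 0.95)}

strategies = ["lora-r8", "lora-r32", "full-finetune", "qlora-4bit"]

with ThreadPoolExecutor(max_workers=4) as pool:  # parallel attempts
    results = list(pool.map(miner, strategies, range(len(strategies))))

# Validator role: rank all submissions, reward the best, drop the rest.
best = max(results, key=lambda r: r["score"])
print(f"winner: {best['strategy']} ({best['score']:.3f})")
```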

3.2 Market-Driven AutoML

From the mechanism breakdown provided earlier, it can be seen that Gradients is not merely transferring AutoML onto the blockchain; instead, it introduces multi-party participation and incentive mechanisms that alter the underlying logic of model optimization. Traditional AutoML relies on a single system's search for optimal solutions within limited paths, whereas in Gradients, this process is expanded to the entire network: different participants continually try various methods around the same task, and through unified evaluations, they continuously filter and iterate. This makes model optimization no longer a one-time computation process but a dynamic one capable of repeated evolution. Under this mechanism, better-performing results will yield higher returns, continually attracting participants to optimize strategies and pushing overall performance to improve.

4. Incentive and Competition Mechanisms: How AI Training Forms a "Positive Cycle"

4.1 Incentive Mechanism (TAO Driven): From Training Actions to Revenue Returns

The key to Gradients' long-term operation is the incentive mechanism behind it, which relies on Bittensor's native incentive system. TAO, the native token of the Bittensor network, serves as the value carrier for the whole network: on one hand, it rewards participants who contribute computing power and models; on the other, through staking and related mechanisms it participates in subnet weight allocation, affecting how resources flow between subnets.

The Bittensor mainnet continually issues new incentive emissions in TAO (currently about 3,600 TAO per day), which are allocated to the subnets according to specific rules. How much each subnet receives depends on its "performance" across the network, such as activity, contribution quality, and staking support. For the subnet Gradients runs on, this allocated TAO is then redistributed internally among participants, on the core principle that whoever contributes better models earns more.

Specifically, miners submit training results, and validators test and score them. The system computes each participant's "contribution weight" from these scores and allocates rewards accordingly. Models that perform better (for example, with stronger generalization and more stable results) receive higher rewards, and validators whose scores track true quality more accurately also earn more. This design ties "performing better" directly to "earning more," pushing participants to keep optimizing their models.
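A simplified stand-in for this score-weighted allocation is sketched below. Bittensor's actual mechanism (Yuma Consensus) also factors in validator stake and inter-validator agreement; this sketch keeps only the core idea that a higher score earns a larger share of the subnet's emission. The scores and emission figure are illustrative.

```python
# Score-weighted reward split: better models earn a larger share.
def allocate(scores: dict[str, float], emission_tao: float) -> dict[str, float]:
    total = sum(scores.values())
    return {m: emission_tao * s / total for m, s in scores.items()}

validator_scores = {"miner_a": 0.91, "miner_b": 0.85, "miner_c": 0.40}  # illustrative
rewards = allocate(validator_scores, emission_tao=100.0)                # illustrative
for m, tao in rewards.items():
    print(f"{m}: {tao:.2f} TAO")
# -> miner_a: 42.13 TAO, miner_b: 39.35 TAO, miner_c: 18.52 TAO
```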

4.2 Competition Among Subnets: Not Just Internal Competition, but External Rankings

Beyond competition within the subnet, Gradients also faces "horizontal competition" across the whole Bittensor network. Because TAO allocation is dynamic, subnets compete for higher weights, and only those that keep producing high-quality results and attracting participants capture larger reward shares. Gradients' incentives therefore depend not only on internal model performance but also on its relative competitiveness within the ecosystem. The system forms a multi-layered cycle: models compete within the subnet, and subnets compete with one another on overall performance. In the end, computing-power investment, model performance, and economic returns are linked together, creating a sustainable positive feedback loop.

4.3 Gradients 5.0: From Competition to "Tournament Mechanism"

Building on the earlier model of continuous competition, Gradients has evolved a more structured mechanism: tournament-style training. This can be understood as a periodic competition: each round sets a time window in which multiple participants compete on the same task, with successive elimination rounds filtering entrants until the best solution is selected. The format emphasizes staged comparison and concentrated evaluation. An important change is that miners no longer submit trained results directly; instead they submit training methods (code), which validator nodes execute uniformly. This improves fairness by removing interference from differing compute environments and better protects the data and the privacy of the training process. Moreover, winning solutions tend to become established, reusable methods, accumulating into a body of "best practice." In the long run, the mechanism is not only screening for the best models but also building an evolving library of training methods.
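The round-based elimination can be sketched as follows, with submitted training methods represented as plain callables that validators execute under identical conditions. The scoring and cut rule (top half advances) are assumptions made for illustration; the actual tournament rules are set by the subnet.

```python
# Schematic tournament: miners submit methods (code), validators run
# them, and the bottom half is cut each round until one winner remains.
import random

def make_method(name: str, seed: int):
    def train_and_eval() -> float:
        # Deterministic per-method score, standing in for validator-run
        # training and evaluation of the submitted code.
        return random.Random(seed).uniform(0.5, 1.0)
    train_and_eval.__name__ = name
    return train_and_eval

entrants = [make_method(f"method_{i}", i) for i in range(8)]

round_no = 1
while len(entrants) > 1:
    ranked = sorted(entrants, key=lambda m: m(), reverse=True)
    entrants = ranked[: len(ranked) // 2]  # top half advances (assumed rule)
    print(f"round {round_no}: {[m.__name__ for m in entrants]} advance")
    round_no += 1

print("tournament winner:", entrants[0].__name__)
```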

5. Ecological Status

5.1 Participant Structure: A Collaborative Network of Demand, Supply, and Evaluation

The Gradients ecosystem consists of three core roles: users (demand side), miners (supply side), and validators (evaluation side). Users primarily include AI developers, small to medium enterprises, and Web3 builders. This group often has a certain technical foundation but lacks computing power or complete model training capabilities, making them more inclined to use Gradients for model building at a lower cost. Miners provide GPU computing power and engage in competition for training tasks, with the core motivation being to earn TAO rewards; validators are responsible for evaluating and ranking training results, which is crucial for ensuring model quality and the effective operation of mechanisms.

From a more detailed user profile perspective, the actual user base of Gradients displays a clear "semi-developer" characteristic: they are neither top AI laboratories nor completely non-technical ordinary users, but rather developers and Web3 technology users with a certain engineering capability. This is also reflected in its community structure, which is currently predominantly English-speaking, with core users primarily distributed among developer groups in North America and Europe, while also covering some Southeast Asian miners and global GPU resource providers. Overall, it approaches a technology-driven developer community.

5.2 Current Ecological Operation Status

As of May 12, Gradients' alpha token is priced at approximately 0.0255 TAO, with roughly 4,890 holder addresses, 243 miners, and 12 validators, and an emission share of 1.61%. In its liquidity pool, TAO accounts for 2.19% and Alpha for 97.81%. Judged by price and holder count, Gradients has built a certain user base and attention but is still in an early diffusion stage overall. For comparison, Chutes, the leading project in the TAO ecosystem, had an alpha token price of 0.0877 TAO on the same day, with 13,409 holder addresses.

Figure 1. Gradients data. Source: https://bittensormarketcap.com/subnets/56

Next, consider the emission metric. In the Bittensor system, emission refers to a subnet's real-time share of the network's newly issued rewards. The network continually mints new TAO and allocates it to subnets by weight; Gradients' current 1.61% means it receives only a small share of network-wide new incentives. This metric essentially reflects the market's "vote" on each subnet via capital flows (such as staking), so a 1.61% level usually indicates that market recognition and capital inflows are still limited, while also leaving room for the weight to rise. On the funding side, the liquidity pool holds only 2.19% TAO against 97.81% Alpha, indicating that external capital inflow remains limited and contributions currently come mostly from internal subnet supply. The price is therefore sensitive to new capital: additional TAO inflows could have a visibly amplified effect.
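As a back-of-envelope check using the article's own figures: at roughly 3,600 TAO of network-wide daily emission (Section 4.1), a 1.61% share implies about 3,600 × 0.0161 ≈ 58 TAO per day flowing to the subnet before internal distribution among miners and validators. Both inputs fluctuate, so this figure is illustrative only.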

6. Competitive Landscape and Advantages/Disadvantages

6.1 Industry Positioning: Training Infrastructure for Decentralized AutoML

Gradients is positioned in the subfield of "AI training infrastructure + decentralized AutoML." It seeks to liberate model training from centralized platforms and achieve more efficient resource utilization and model optimization through a networking mechanism. In the Web2 system, this track is relatively mature, with typical representatives including Google Vertex AI and AWS SageMaker. These platforms provide a one-stop model training and deployment service for developers through cloud computing, but their essence remains centralized architecture. In contrast, Gradients' distinction does not lie in offering "more features" but in its different underlying logic: it transforms training from "platform service" to "network collaboration" and selects optimal results through competitive mechanisms, making it a training system closer to market operation.

6.2 Horizontal Comparison: Differences between Web2 and Web3 AutoML

Viewed more broadly, the difference between Web2 and Web3 approaches to AutoML is essentially a contrast between two paradigms. The Web2 model emphasizes efficiency and stability, delivering a controllable, mature service experience through centralized resources and engineering optimization; the Web3 model emphasizes openness and incentives, letting model optimization evolve continuously through multi-party participation. Concretely, Web2 AutoML resembles a powerful tool: users hand tasks to a platform, and the system searches for the optimal solution internally. Web3 AutoML, as represented by Gradients, resembles an open market: users post demands, different participants offer solutions, and results are filtered by an evaluation mechanism. The immediate consequence is that the former is more stable and controllable but has limited optimization paths, while the latter has a larger search space and a higher potential ceiling but still has room to mature in stability.

6.3 Gradients' Differentiation in Web3

In the current Web3 AI track, most projects remain focused on inference layers or AI Agents, while projects concentrating on "training infrastructure" are relatively scarce. Some projects try to combine computing power networks or data networks to provide training capabilities, but overall, most remain at the resource scheduling or computing power market level. Gradients' differentiation lies in its not only providing computing power matching but also extending upwards to the "model optimization mechanism" itself, introducing evaluation and competition systems that allow the training process to have continuous evolutionary capabilities. This means it not only solves "where does computing power come from" but also addresses "how to use this computing power more efficiently." From a positioning perspective, Gradients is closer to a "training-results-oriented" network rather than merely a computing power market or tool platform, which is a core difference from most Web3 AI projects.

6.4 Core Advantage: Mechanism-Driven Efficiency Enhancement

Overall, Gradients' advantages stem mainly from its mechanism design. First, task abstraction lowers the entry barrier: users can obtain model results without engaging deeply in the training process, which widens the potential user base. Second, at the resource level, distributed computing power removes reliance on any single cloud vendor and, in principle, yields a more flexible cost structure through competition. Most importantly, its optimization method changes: by combining parallel exploration by many participants with a filtering mechanism, Gradients offers an alternative to traditional single-path optimization that can reach better model performance in less time. This competition-driven optimization is its core advantage.

6.5 Potential Challenges

First, model quality may be unstable. Decentralized training relies on multi-party participation, which can raise the ceiling but may also make results fluctuate, introducing uncertainty relative to centralized systems. Second, there are enterprise-level trust issues: for enterprise users, data security and the verifiability of the training process are crucial, and ensuring in a decentralized environment that data is not misused and results can be audited remains a key challenge. Last is dependence on the token economy. Gradients' operation leans heavily on its incentive mechanism; if the appeal of TAO returns declines, miner participation and overall network activity could suffer. Its long-term sustainability therefore depends, to a degree, on whether the economic model can sustain a stable positive cycle.

7. Future Outlook: Can Decentralized AutoML Materialize?

From the current stage, Gradients is still in its early phase, and whether it can genuinely succeed in the future hinges on several key factors. The most critical is whether it can continuously attract real training demands rather than just participants driven by incentives; secondly, the quality of models—whether the decentralized approach can stably produce usable or even superior results; and lastly, whether the economic mechanism can form a positive cycle, maintaining long-term balance between computing power supply and returns.

In a broader industry context, AI training is diverging into two paths. One is the Web2 model, led by major tech companies, which continuously strengthens model performance through centralized resources and engineering capabilities, emphasizing stability and maturity; the other is the Web3 path represented by Gradients, which enables more participants to engage in model optimization through open networks and incentive mechanisms, continually enhancing upper limits through competition. The former focuses on "building a stronger system," while the latter is more about "constructing a self-evolving network."

From this perspective, Gradients' exploration represents a new possibility: AI training is no longer just a technical issue but a combination of "computing power + data + market mechanisms." If this model can materialize, it has the potential to become the training entry point for decentralized AI and play a critical infrastructure role within the Bittensor ecosystem. Of course, this direction still requires time for validation, but it has already provided a developmental approach for AutoML that differs from traditional paths.

References

1. Bittensor Documentation: https://docs.learnbittensor.org
2. Gradients website: https://www.gradients.io/
3. Gradients on Bittensor Market Cap: https://bittensormarketcap.com/subnets/56
4. Gradients on X: https://x.com/gradients_ai
5. Taostats: https://taostats.io/subnets/56/chart
