@ionet is a decentralized computing-power network built on Solana that offers a platform for GPU-based machine-learning training. However, critics argue that the actual number of rentable GPUs is only 320, and that the project faces challenges such as poor cost-effectiveness and weak inference performance on its hardware.
Authors: @rargulati, @MartinShkreli
Translation: Plain Blockchain
@ionet is a decentralized computing-power network built on Solana, part of the DePIN and AI sectors. It has received funding from Multicoin Capital and Moonhill Capital; the amount was not disclosed.
io.net describes itself as a decentralized cloud platform for GPU-based machine-learning training built on Solana, providing instant, permissionless access to a global network of GPUs and CPUs. The platform claims 25,000 nodes and says its technology for aggregating GPU cloud clusters can save large-scale AI startups up to 90% of computing costs.
Built on Solana, it sits in the currently popular DePIN and AI sectors. Today, let's look at these two commentators' analysis on X of its GPU count and outstanding issues:
@ionet, how many GPUs (Graphics Processing Units, chips used for graphics processing) do you have?
On X, @MartinShkreli laid out four candidate answers:
1) 7648 (observed when attempting a deployment)
2) 11107 (manually tallied from their resource manager)
3) 69415 (an inexplicable number, unchanged?)
4) 564306 (there is no support, transparency, or substantive information behind this figure. Not even CoreWeave or AWS has this many)
The actual answer is believed to be 320.
Why 320?
Let's look at the resource manager page together. All the GPUs are listed as "free" (idle), yet you still can't rent one. If they are free, why can't you rent them? Suppliers want to be rewarded, right?
In reality, you can only rent 320 of them.
If you can't rent them, then for practical purposes they don't exist. Even if you could rent them, it would increase…
@rargulati stated that Martin is entirely right to question this. Decentralized AI protocols face the following issues:
1) There is no cost-effective, time-efficient way to do useful online training on highly distributed, general-purpose hardware. That would require a major breakthrough that I am not aware of. This is why FANG companies spend more money on expensive hardware, network interconnects, and data-center operations than all the liquidity in crypto combined.
2) Running inference on general-purpose hardware sounds like a good use case, but hardware and software are advancing so rapidly that decentralized general-purpose approaches perform poorly in most critical use cases. See OpenAI's recent delays and the growth of Groq.
3) Serving inference via correctly routed requests, co-located with tightly coupled GPU clusters, using decentralized crypto incentives to lower capital costs, compete with AWS, and reward enthusiast participation sounds like a good idea. But with so many suppliers, GPU spot-market liquidity is fragmented, and no one has aggregated enough supply to serve customers running real businesses.
4) The software routing algorithm must be very good; otherwise, consumer operators' general-purpose hardware causes many operational problems. Forget about networking breakthroughs and congestion control: if an operator decides to play a game or open anything that uses WebGL, you may see a service interruption from that operator. Unpredictable supply-side disruptions will plague operators and create uncertainty for demand-side requesters.
These are all thorny problems that will take a very, very long time to solve. The current offerings are just a joke.
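Point 4 above can be illustrated with a toy simulation. This is a hypothetical sketch, not io.net's actual scheduler: it routes requests at random to a pool of nodes, where each node is independently online with some probability, and compares consumer-grade uptime against datacenter-grade uptime.

```python
import random

def simulate(num_nodes, uptime, num_requests, rng):
    """Return the fraction of requests served on the first attempt,
    assuming naive random routing and each node independently
    online with probability `uptime` at request time."""
    served = 0
    for _ in range(num_requests):
        node = rng.randrange(num_nodes)  # pick a node at random
        if rng.random() < uptime:        # was that node online?
            served += 1
    return served / num_requests

rng = random.Random(42)
# Assumed numbers: consumer hardware online ~80% of the time
# (owner may be gaming or running WebGL), vs ~99.9% for a datacenter.
consumer = simulate(num_nodes=100, uptime=0.80, num_requests=10_000, rng=rng)
datacenter = simulate(num_nodes=100, uptime=0.999, num_requests=10_000, rng=rng)
print(f"consumer success rate:   {consumer:.3f}")
print(f"datacenter success rate: {datacenter:.3f}")
```

Even this crude model shows why a router over unreliable consumer nodes must be far smarter (health checks, retries, redundancy) than one over dedicated clusters, which translates directly into cost and latency overhead for the demand side.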
Source: https://twitter.com/rargulati/status/1784309894880940262
Disclaimer: This article represents only the personal views of the author and does not reflect the position or views of this platform. It is provided for information sharing only and does not constitute investment advice to anyone. Any dispute between users and the author is unrelated to this platform. If any article or image on this page infringes your rights, please send proof of rights and identity to support@aicoin.com, and platform staff will investigate.