When Trump invests trillions in AI, who is providing trustworthy data for AI?
Trump's trillion-dollar AI investment may look like a competition among models, chips, and data centers, but it raises deeper questions: How is the data that AI models rely on verified, and is it traceable? Is training a black box, and can inference be audited? Can models collaborate, or are they doomed to work alone?
Put more simply: when we get information from AI, who can ensure that information is correct? Data pollution is no longer just a buzzword; one AI application billed as a "ChatGPT killer" has already sunk deep into polluted data. If the data sources are all wrong, how can the answers be right?
Is today's AI intelligent? Perhaps, but even the smartest AI requires training. Yet we cannot know what data a model was trained on, we cannot verify whether a GPU actually completed an inference, and we cannot establish a logic of mutual trust among multiple models.
To truly advance AI to the next generation, it may be necessary to address these three issues simultaneously:
Training data must be trustworthy and verifiable.
The inference process must be auditable by third-party models.
Models must be able to coordinate computing power, exchange tasks, and share results without the need for platform mediation.
This cannot be solved by a single model, a single API, or a single GPU platform; it requires a system built specifically for AI. Such a system should store data permanently at low cost, give the data itself the right to be reviewed and audited, enable models to verify each other's inferences, and let models autonomously discover computing power, coordinate tasks, and audit every step of execution under defined conditions.
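The first requirement, trustworthy and verifiable training data, reduces to a simple mechanism: content-address the data and record its hash in an append-only registry. The sketch below illustrates that idea in plain Python; the names (`DataRegistry`, `register`, `verify`) are hypothetical and do not come from any real chain or library.

```python
import hashlib

# Minimal sketch of content-addressed data provenance, assuming an
# append-only registry (which a blockchain would make immutable).
# All names here are illustrative, not a real API.

class DataRegistry:
    """Maps a SHA-256 content hash to provenance metadata."""

    def __init__(self):
        self._records = {}

    def register(self, data: bytes, source: str) -> str:
        digest = hashlib.sha256(data).hexdigest()
        # On-chain, this record would be immutable once written.
        self._records[digest] = {"source": source}
        return digest

    def verify(self, data: bytes) -> bool:
        # Anyone holding the raw bytes can recompute the hash and
        # check it against the registered record.
        return hashlib.sha256(data).hexdigest() in self._records


registry = DataRegistry()
corpus = b"example training shard"
digest = registry.register(corpus, source="dataset-v1")

print(registry.verify(corpus))             # True: bytes match the record
print(registry.verify(b"tampered shard"))  # False: no such hash registered
```

The point is not the code itself but the property it buys: once the hash is anchored somewhere immutable, any later tampering with the training shard is detectable by recomputation.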
This is difficult to achieve on centralized platforms. Could it be implemented on a decentralized one, and why would a decentralized approach help?
I believe only blockchain can truly integrate data storage, data execution, and data verification within the same underlying network. Immutability and transparency are among blockchain's greatest attractions. The problem, however, is that not every chain is suited to serve as the underlying layer for AI.
If it were purely about storage, IPFS already exists. But simple storage is not enough: smart contracts must also be able to call data directly, audit inference results, and even coordinate GPU resources to complete computational tasks. These capabilities are beyond not only IPFS but also most L1s and AI applications today.
If anything comes close, it might be @irys_xyz. Irys is not a traditional storage chain; it aims to build a data-execution network for AI that treats data as a programmable asset. Models can read on-chain data, verify inferences, call computing power, and handle pricing, authorization, profit-sharing, and verification through smart contracts.
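To make "data as a programmable asset" concrete, here is a toy sketch of what pricing, authorization, and profit-sharing could look like. It mimics smart-contract logic in plain Python; none of these names come from Irys's actual API, and the revenue split uses integer percentages purely for determinism.

```python
# Illustrative sketch only: a data asset whose access rules and revenue
# split are enforced by code, as a smart contract would enforce them.
# All names (DataAsset, purchase_access, read) are hypothetical.

class DataAsset:
    def __init__(self, owner: str, price: int, shares: dict):
        self.owner = owner
        self.price = price        # access fee, arbitrary units
        self.shares = shares      # revenue split in integer percent
        self.authorized = set()   # callers who have paid for access
        self.balances = {party: 0 for party in shares}

    def purchase_access(self, buyer: str, payment: int) -> bool:
        if payment < self.price:
            return False
        self.authorized.add(buyer)
        # Profit-sharing: split the payment per the declared percentages.
        for party, pct in self.shares.items():
            self.balances[party] += payment * pct // 100
        return True

    def read(self, caller: str) -> str:
        # Authorization check: only paying callers may read.
        if caller not in self.authorized:
            raise PermissionError("caller not authorized")
        return "<data payload>"


asset = DataAsset(owner="alice", price=100,
                  shares={"alice": 80, "curator": 20})
asset.purchase_access("model-A", payment=100)
print(asset.read("model-A"))  # <data payload>
print(asset.balances)         # {'alice': 80, 'curator': 20}
```

On an actual chain, the same rules would run inside a contract so that neither the data owner nor the buyer could bypass the split; the sketch only shows the shape of the logic.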
Of course, Irys is still immature in places, but the direction seems right. And whether AI is centralized or decentralized, if the data source is untrustworthy, all computing power is built on sand; no matter how strong the model, its output is merely a moon reflected in water, a flower seen in a mirror.