Gradient: When AI No Longer Belongs to Centralized Giants!
The video below is quite interesting:
@alexmirran and the team got @AlibabaQwen 235B running in just a few minutes.
It allows us to build local AI clusters on Mac and PC, hosting our own models and applications without compromising performance!
How is this achieved?
Over the past few years, AI has grown increasingly powerful, yet also increasingly "privatized": models are closed, APIs are expensive, and data is locked down.
Along a quieter path, however, a group of people is trying to rewrite this narrative, believing that AI should be a public resource, not a product.
Gradient is building a decentralized AI infrastructure.
At its core are three protocols:
1️⃣ Echo: A distributed reinforcement learning engine that lets RL run in a decentralized environment, so model post-training is no longer the monopoly of a few companies.
2️⃣ Parallax: A distributed inference engine that lets anyone run their own AI and join an inference network with others, serving broader demand.
3️⃣ Lattica: The transport layer that lets models and data flow freely around the globe.
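The distributed-inference idea behind Parallax can be pictured in miniature: shard a model's layers across independent nodes and pipeline activations from one node to the next. The sketch below is purely illustrative — toy "layers" as functions and in-process "nodes" — and is not Gradient's actual API.

```python
# Illustrative sketch of pipeline-style distributed inference: each "node"
# (a Mac, a PC, ...) holds only a contiguous shard of the model's layers
# and forwards its activations to the next node. Toy code, not Parallax.

from typing import Callable, List

Layer = Callable[[float], float]

class Node:
    """One machine in the cluster, hosting a shard of layers."""
    def __init__(self, name: str, layers: List[Layer]):
        self.name = name
        self.layers = layers

    def forward(self, x: float) -> float:
        for layer in self.layers:
            x = layer(x)
        return x

def pipeline_infer(nodes: List[Node], x: float) -> float:
    """Run inference by passing activations node to node."""
    for node in nodes:
        x = node.forward(x)
    return x

# A toy 4-layer "model" split across two machines.
model: List[Layer] = [
    lambda x: x * 2,
    lambda x: x + 1,
    lambda x: x * 3,
    lambda x: x - 4,
]
cluster = [Node("mac-mini", model[:2]), Node("gaming-pc", model[2:])]

print(pipeline_infer(cluster, 1.0))  # same result as running every layer on one machine
```

The point of the sketch: no single machine needs to hold the full model, yet the pipelined result is identical to local execution — the real systems add networking, scheduling, and KV-cache management on top of this basic shape.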
It sounds like science fiction, but it is actually a very concrete vision: the Open Intelligence Stack.
An open, permissionless intelligent system: not reliant on giant clouds, not locked behind APIs, but running on public networks.
Just like the demo at the beginning!
I looked through their research documents: they have already released the distributed inference engine Parallax, the reinforcement learning framework Echo, and the enterprise-level solution Gradient Cloud.
Moreover, top AI labs and projects like Kimi, Alibaba Tongyi Qianwen, SGLang, and Minimax have already voiced their support. It feels like a combination of the crypto world's open spirit and the AI world's technical rigor.
That quality is rare: this is neither an "AI narrative" nor hype. From GPU/Mac scheduling to distributed KV caches and cross-network heterogeneous inference optimization, each paper is genuine research.
This calm and long-term approach reminds me of early DeepSeek: not noisy, not fast, but steadily laying out the underlying architecture.
In June 2025, Gradient completed a $10 million financing round led by Pantera and Multicoin. Pantera established a DeAI track in its portfolio for the first time, and Gradient is their only bet.
Founder Eric comes from Sequoia, and the team of nearly 40 includes ACM gold medalists, Tsinghua Yao Class alumni, and Columbia PhDs, a strong technical bench.
You can sense a rare quality:
Both the resilience of engineers and the aesthetics of researchers.
Conclusion:
In an era where giants define intelligence, Gradient is attempting to reopen it. It may not be the one that ultimately succeeds;
But its existence proves that the path of "open intelligence" has already been illuminated.