

头雁 | July 10, 2025 14:18
I thought @0G_labs was just a shell project, because I stared at their official website for a long time and couldn't tell what they actually do. Today I saw the decentralized-training progress in the tweet below, and it matches the technical directions of several decentralized training approaches I shared earlier. Their model scale should top out around 100B parameters; there is a paper (https://arxiv.org/abs/2506.21263) but no code. DeepMind's @Ar-Douillard also reposted the paper. Results reported in the paper:

- A 107B-parameter model can be trained over a slow 1 Gbps network.
- 357× faster than vanilla AllReduce on Qwen1.5-107B.
- Outperforms OpenDiLoCo (which runs out of memory above roughly 20B parameters) and CocktailSGD (whose aggressive compression hurts convergence).
- Convergence is maintained at compression ratios of up to 1000×.

The first major result in decentralized training was the 2023 paper from @Ar-Douillard's DeepMind team (https://arxiv.org/abs/2311.08105, follow-up: https://arxiv.org/abs/2502.12996). They mainly train models on heterogeneous GPUs connected over slow internet links. The models were relatively small at first, but the work clearly inspired many others, for example the @PrimeIntellect team and the @gensynai team (initially deployed on the @polkadot ecosystem; I don't know why they later chose to deploy on an Ethereum L2). These teams mainly focus on exploiting the asynchronous nature of RL for model post-training.
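The DiLoCo family the post refers to reduces communication by letting each worker take many cheap local optimizer steps and exchanging only a parameter delta (a "pseudo-gradient") once per round, which an outer optimizer then applies. A minimal sketch of that pattern on a toy quadratic objective; all hyperparameter names and values here are illustrative assumptions, not the papers' settings:

```python
import numpy as np

rng = np.random.default_rng(0)
dim, workers, H, rounds = 10, 4, 20, 30
inner_lr, outer_lr, momentum = 0.05, 0.5, 0.5

# Toy objective: minimize ||x - target||^2, so grad = 2 * (x - target).
target = rng.normal(size=dim)
global_params = np.zeros(dim)
outer_velocity = np.zeros(dim)

for _round in range(rounds):
    deltas = []
    for _w in range(workers):
        local = global_params.copy()
        for _step in range(H):  # H local SGD steps, zero communication
            grad = 2 * (local - target) + 0.01 * rng.normal(size=dim)
            local -= inner_lr * grad
        deltas.append(global_params - local)  # pseudo-gradient
    # The only communication per round: average the pseudo-gradients.
    avg_delta = np.mean(deltas, axis=0)
    # Outer optimizer: SGD with momentum applied to the pseudo-gradient.
    outer_velocity = momentum * outer_velocity + avg_delta
    global_params -= outer_lr * outer_velocity

final_err = float(np.linalg.norm(global_params - target))
print(final_err)  # should end up far below np.linalg.norm(target)
```

Communication happens once per `H` inner steps instead of once per step, which is the source of the bandwidth savings the post describes.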
Among them, @PrimeIntellect has an open-source implementation: https://arxiv.org/abs/2407.07852, https://www.primeintellect.ai/blog/opendiloco, https://github.com/PrimeIntellect-ai/OpenDiLoCo

There is also @tplr.ai in this field (I only discovered them today), building on a Bittensor subnet (Bittensor used to feel like a shell project too, but it now seems there is a real ecosystem shipping things): https://www.tplr.ai/research, https://www.tplr.ai/papers/templar_paper.pdf, https://arxiv.org/abs/2505.23725, https://github.com/tplr-ai/CCLoco, https://templarresearch.substack.com/p/ccloco-scaling-up-top-k-error-feedback

@NousResearch is another team working in this direction. Teams researching this area are worth keeping an eye on.
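The CCLoco work cited above is built on top-k error feedback: transmit only the k largest-magnitude entries of each update, and carry the dropped mass forward in a local residual so aggressive compression never permanently loses gradient information. A minimal sketch of that mechanism (variable names and sizes are illustrative assumptions, not Templar's code):

```python
import numpy as np

def topk_with_error_feedback(grad, residual, k):
    """Return (sparse update to transmit, new residual)."""
    corrected = grad + residual  # fold in previously dropped mass
    idx = np.argpartition(np.abs(corrected), -k)[-k:]  # k largest entries
    sparse = np.zeros_like(corrected)
    sparse[idx] = corrected[idx]          # all that goes on the wire
    return sparse, corrected - sparse     # dropped mass becomes residual

rng = np.random.default_rng(1)
dim, k = 1000, 10                         # 100x compression of each update
residual = np.zeros(dim)
total_sent = np.zeros(dim)
total_true = np.zeros(dim)

for _step in range(200):
    grad = rng.normal(size=dim)
    sparse, residual = topk_with_error_feedback(grad, residual, k)
    total_sent += sparse
    total_true += grad

# Error feedback's invariant: everything sent plus the residual still
# held locally equals the sum of the true gradients - nothing is lost.
print(np.allclose(total_sent + residual, total_true))
```

This invariant is why such schemes can push compression ratios very high (the post mentions up to 1000×) without destroying convergence the way naive top-k dropping would.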
Timeline

Aug 05, 14:16【Google DeepMind releases Genie 3 interactive world model】
Jul 17, 03:03【Gemini 2.5 Pro model launches in Google Search's AI mode】
Jul 02, 02:37【AI industry shifts toward local small models and edge computing】
Jun 15, 02:11【Google releases its most powerful AI: AlphaEvolve】
