
Elon Musk|Jul 22, 2025 16:54
230k GPUs, including 30k GB200s, are operational for training Grok @xAI in a single supercluster called Colossus 1 (inference is done by our cloud providers).
At Colossus 2, the first batch of 550k GB200s & GB300s, also for training, starts going online in a few weeks.
As Jensen Huang has stated, @xAI is unmatched in speed. It’s not even close.