Today, during a live broadcast, Elon Musk officially unveiled the artificial intelligence model Grok 3 and, for the first time, disclosed details about its training costs: the training run consumed compute equivalent to 200,000 NVIDIA GPUs.
The training took place at "Colossus," xAI's newly constructed supercomputing data center, which is regarded as one of the world's leading AI training facilities. This infrastructure underpinned the large-scale training of Grok 3.
Compared with its predecessor, Grok 2, Grok 3 marks a significant increase in training scale. Where Grok 2 was trained on approximately 20,000 GPUs, Grok 3's training compute is ten times greater, consistent with the roughly 200,000-GPU figure cited above. This substantial investment in computational resources suggests that Grok 3 may deliver a qualitative leap in reasoning ability, comprehension, and content generation.
Given these upgrades, industry observers are watching closely to see how Grok 3 performs in practical applications and how it might reshape the artificial intelligence landscape.