Elon Musk’s company xAI has begun training on what he calls the world’s most powerful AI cluster, built from 100,000 Nvidia H100 GPUs in Memphis.
Elon Musk announced on his platform X that xAI, the company behind the Grok family of large language models (LLMs), has started training on the cluster, which he describes as the most powerful in the world. The facility is located in Memphis and uses liquid-cooled Nvidia H100 GPUs, which cost about $50,000 each.
The GPUs are networked using Remote Direct Memory Access (RDMA), which transfers data between nodes with low latency and without involving the host processor. Musk predicts that by December of this year the cluster will be the most powerful in the world by every metric.
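To illustrate the idea behind RDMA mentioned above, the sketch below contrasts a conventional, CPU-mediated transfer with a one-sided RDMA read. The `RemoteNode` and `RdmaNic` classes and their methods are purely illustrative placeholders, not a real RDMA API; production RDMA code goes through verbs interfaces such as libibverbs, and nothing here is specific to xAI's setup.

```python
# Conceptual sketch of why RDMA lowers latency: the network card (NIC)
# reads remote memory directly, so the remote host's CPU never has to
# wake up and copy the data. All classes are illustrative mocks, not a
# real RDMA API (real code would use verbs, e.g. libibverbs / pyverbs).

class RemoteNode:
    """A node whose memory another node wants to read."""
    def __init__(self, data: bytes):
        self.memory = bytearray(data)   # pre-registered ("pinned") buffer
        self.cpu_copies = 0             # counts CPU-driven copy operations

    # Conventional two-sided path: the remote CPU must service the request.
    def send_over_tcp(self, offset: int, length: int) -> bytes:
        self.cpu_copies += 1            # remote CPU copies data into a packet
        return bytes(self.memory[offset:offset + length])


class RdmaNic:
    """Mock NIC that performs one-sided reads of registered remote memory."""
    def __init__(self, remote: RemoteNode):
        self.remote = remote

    def rdma_read(self, offset: int, length: int) -> bytes:
        # The NIC pulls the bytes directly; remote.cpu_copies is not touched,
        # which models the "without involving the host processor" property.
        return bytes(self.remote.memory[offset:offset + length])


if __name__ == "__main__":
    node = RemoteNode(b"gradient shard " * 4)

    # Two-sided transfer: the remote CPU participates.
    tcp_bytes = node.send_over_tcp(0, 15)
    print(f"remote CPU copies after TCP-style read : {node.cpu_copies}")   # 1

    # One-sided RDMA read: the remote CPU is bypassed.
    nic = RdmaNic(node)
    rdma_bytes = nic.rdma_read(0, 15)
    print(f"remote CPU copies after RDMA-style read: {node.cpu_copies}")   # still 1

    assert tcp_bytes == rdma_bytes
```

In a training cluster this matters because nodes exchange large tensors (gradients, activations) constantly; keeping the host CPU out of that path is what keeps inter-node latency low.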
Earlier this year, Musk announced plans to build a supercomputer that he also calls the “Gigafactory of Compute”; according to him, it will come online in the fall of 2025. Currently, Grok, xAI’s AI assistant, is available to paid subscribers on the X platform. The company has also announced the release of Grok 2 next month and a third model by the end of the year. In May, xAI raised $6 billion from prominent Silicon Valley investors to build out infrastructure and accelerate research and development of future technologies.