"NVIDIA Accelerates AI Chip Update Pace, Introducing New Architecture Annually"

2024-05-23

NVIDIA's AI chips just earned the company a staggering $14 billion in profit in a single quarter, and it now plans to move even faster. CEO Jensen Huang has stated explicitly that the company will design new chips every year, replacing its previous two-year cycle.


"I can announce that, following Blackwell, we will launch another chip. We have now entered a yearly update rhythm," Huang said during the company's Q1 FY2025 earnings conference call.

So far, NVIDIA has released a new architecture approximately every two years - for example, Ampere in 2020, Hopper in 2022, and Blackwell in 2024.

(The H100 AI chip is based on the Hopper architecture and the B200 on Blackwell; the same architecture families also underpin NVIDIA's gaming and creator GPUs.)

However, analyst Ming-Chi Kuo stated in a report earlier this month that the next-generation architecture, "Rubin," is expected to be released in 2025, which means we may see the R100 AI GPU as early as next year. Huang's comments suggest that this report is likely accurate.

Huang stated that NVIDIA will accelerate the development of all its other chips to match this new update rhythm. "We will drive them forward at an extremely fast pace," he added.

"New CPUs, new GPUs, new network NICs, new switches... a large number of new chips are about to be unveiled," he further stated.

During the conference call, an analyst asked how the latest Blackwell GPUs could ramp up while Hopper GPU sales remained strong. Huang explained that NVIDIA's new generation of AI GPUs is electrically and mechanically backward compatible and runs the same software, so customers will be able to "easily transition from H100 to H200 and then to B100" in their existing data centers.
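As a rough illustration of what that software continuity means in practice, here is a minimal, hypothetical PyTorch sketch (not NVIDIA's code): the same training step runs unchanged whether the underlying GPU is an H100, H200, or B100, because CUDA and the framework abstract away the specific architecture. The function names and the toy model are assumptions for illustration only.

```python
# Hypothetical sketch: generation-agnostic GPU code.
# The same snippet runs on Hopper- or Blackwell-class GPUs (or CPU) without changes.
import torch


def describe_device() -> None:
    """Report which NVIDIA GPU generation this code happens to be running on."""
    if not torch.cuda.is_available():
        print("No CUDA device found; running on CPU.")
        return
    props = torch.cuda.get_device_properties(0)
    # compute capability identifies the architecture (e.g. 9.0 corresponds to Hopper)
    print(f"GPU: {props.name}, compute capability {props.major}.{props.minor}")


def tiny_training_step() -> float:
    """One generic training step; nothing here is tied to a specific GPU generation."""
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = torch.nn.Linear(1024, 1024).to(device)
    opt = torch.optim.SGD(model.parameters(), lr=1e-3)
    x = torch.randn(32, 1024, device=device)
    loss = model(x).pow(2).mean()
    loss.backward()
    opt.step()
    return loss.item()


if __name__ == "__main__":
    describe_device()
    print(f"loss: {tiny_training_step():.4f}")
```

The electrical and mechanical compatibility Huang mentions is the hardware-side counterpart: boards that slot into the same server designs, so the code above and the racks it runs in both carry over between generations.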

Huang also shared some of his sales strategies during the conference call to explain the astonishing demand for NVIDIA's AI GPUs:

"We expect demand to exceed supply for a period of time during the transition to H200 and Blackwell. Major companies are eager to get their infrastructure up and running. The reason is that they are saving and making money through AI and want to achieve this as soon as possible," he said.

He also presented a humorous FOMO (fear of missing out) perspective:

"The next company to reach the next important milestone will announce a breakthrough AI technology, while the company that follows can only announce a 0.3% improvement in technology. Do you want to be the company that launches breakthrough AI or the company that only launches a 0.3% improvement?"

NVIDIA CFO Colette Kress also mentioned that the automotive industry will be the company's "largest vertical within data centers this year." She pointed out that Tesla has purchased 35,000 H100 GPUs to train its "Full Self-Driving" system, and that "consumer internet companies" like Meta will continue to be a "strong growth vertical."

Some customers have already purchased or plan to purchase over 100,000 NVIDIA H100 GPUs - Meta plans to deploy over 350,000 H100 GPUs by the end of the year.