Nvidia CEO Jensen Huang predicts that a billion robotic vehicles will one day roam the roads. While this vision may sound like science fiction, Huang confidently stated, "I am science fiction." He shared this perspective during a conference call with analysts discussing Nvidia's Q4 earnings report for fiscal year 2025. Although Nvidia's stock slipped 0.5% to $130.72 per share in after-hours trading, the company reported several positive updates.
Nvidia's Executive Vice President and CFO Colette Kress revealed during the call that data center revenue surged by 93% year-over-year and 16% quarter-over-quarter, driven by the progress of Blackwell and increased Hopper chip sales. She emphasized that Blackwell's sales exceeded expectations, stating, "This is the fastest product ramp in our company’s history, unmatched in both speed and scale." Blackwell is now fully deployed across multiple configurations with growing supply and expanding customer adoption. Data center computing revenue rose 18% sequentially and more than doubled year-over-year as customers expand infrastructure to train next-generation models and unlock advanced AI capabilities.
Kress added that clusters using Blackwell typically start at 100,000 GPUs or more, with various infrastructures already shipping. Post-training and model customization are driving demand for Nvidia's infrastructure and software as developers apply techniques such as fine-tuning, reinforcement learning, and distillation. For instance, Hugging Face hosts over 90,000 derivative models built from the Llama base model. She anticipates that post-training and customization will require significantly more computational power than pre-training. Meanwhile, inference demand is accelerating, driven by new models such as OpenAI o3 and DeepSeek.
Regarding China, Kress noted that sales are expected to increase sequentially while remaining at roughly the same percentage of total revenue as in Q4, about half of their level before the Biden administration's export controls. She highlighted that Nvidia's inference costs have fallen 200-fold over two years and pointed out that, as AI expands beyond the digital realm, Nvidia's infrastructure and software platforms are increasingly adopted for robotics and physical AI development. Additionally, Nvidia's automotive revenue is set to grow.
At CES, Nvidia showcased its Cosmos World Foundation Model platform, with Uber among the first companies to adopt the technology. Geographically, the U.S. shows the greatest potential for data center revenue growth, while countries worldwide are building AI ecosystems, driving demand for computing infrastructure. AI investment plans from France and the EU's €200 billion initiative point to significant global AI infrastructure development in the coming years.
Kress acknowledged that data center sales in China remain far below pre-export control levels but believes shipments will stabilize if regulations remain unchanged. She affirmed, "We will continue to comply with export controls while serving our customers."
In gaming and AI PCs, Kress reported gaming revenue at $2.5 billion, down 22% sequentially and 11% year-over-year. However, annual revenue reached $11.4 billion, up 9% year-over-year, with strong holiday demand. Q4 shipments were affected by supply constraints, but she expects robust sequential growth in Q1 as supplies increase with the launch of the new GeForce RTX 50 series desktop and laptop GPUs designed for gamers, creators, and developers. The RTX 50 series features the Blackwell architecture, fifth-generation Tensor Cores, and fourth-generation RT Cores, with DLSS 4 software boosting frame rates up to eight times compared to the previous generation.
Automotive revenue hit a record $570 million, up 27% sequentially and 103% year-over-year, with total annual revenue reaching $1.7 billion, a 55% increase. This growth is fueled by autonomous vehicles, including cars and robots. At CES, Nvidia announced that Toyota, the world's largest automaker, will build its next-generation vehicles on Nvidia Drive AGX Orin running the safety-certified Nvidia DriveOS. Increased chip production led to higher engineering development costs this quarter.
When asked about the blurred line between training and inference, Huang pointed to "multiple scaling laws": pre-training, post-training with reinforcement learning, and test-time computation, or inference scaling. He stressed that these methods are still in their infancy and will continue to evolve. He noted that while Nvidia excels in training, most computation today is inference, a workload Blackwell was designed for: on long test-time inference AI models, it delivers roughly tenfold faster speed and 25 times higher throughput.
Huang expressed even greater enthusiasm than at CES, citing the 1.5 million components in every Blackwell-based rack. He admitted the task isn't easy, but all Blackwell partners are working diligently. During the initial Blackwell ramp, gross margins will be around 70%. Kress added, "We're focused on accelerating manufacturing to ensure we can deliver