To hear Jensen Huang tell it, Nvidia — and artificial intelligence (AI) in general — is just getting started. The CEO of the world's most valuable company didn't rest on his laurels after posting 94% year-over-year revenue growth for Q3, instead fielding several questions about the future of his company as well as the general prospects for AI's growth through the rest of the decade.
“Many AI services are running 24/7, just like any factory,” Huang told the earnings call audience. “We’re going to see this new type of system come online. And I call it [the company’s data centers] an AI factory because that’s really close to what it is. It’s unlike a data center of the past.
“And these fundamental trends are really just beginning. We expect this to happen, this growth, this modernization and the creation of a new industry to go on for several years.”
Huang and CFO Colette Kress clearly feel that the company's best days are ahead of it, even as analysts question whether it can keep up the pace in several areas: large language model (LLM) development, AI usage scale and the torrid revenue growth it has achieved over the past two years.
Their reasons for optimism ranged from consumer adoption rates to the coming explosion of enterprise and industrial AI to the long list of companies that rely on Nvidia data centers and chips (whose manufacturing is outsourced) for their own applications.
By way of background, an AI data center is a specialized facility designed to handle the heavy computational demands of AI workloads. It provides the infrastructure needed to train and deploy complex machine learning models and algorithms by processing massive amounts of data with high-performance servers, specialized hardware accelerators and advanced networking, all optimized for AI operations. In simpler terms, it's a data center specifically built to power AI applications at scale.
If there was a theme on the call and in the earnings materials, it was that laundry list of companies from Alphabet to Meta to Microsoft to Oracle to Volvo that are hooked into Nvidia. But when that list wasn't running, Huang and Kress faced some tough questions from analysts, ranging from scaling for LLM development to a potential controversy about reported overheating issues for the company's seven-chip Blackwell set of GPUs that it is banking its next few years on. For perspective, the company's Q3 earnings were achieved without shipping any newly designed chips. Blackwell is the new addition, and demand, according to Kress, is "staggering."
Despite some concerns about a potential slowdown in the scaling of LLMs, Huang maintained that there is still ample opportunity for growth. He emphasized that the scaling of foundation models is “intact and continuing,” citing ongoing advancements in post-training scaling and inference-time scaling.
Post-training scaling, which initially involved reinforcement learning with human feedback, has evolved to incorporate AI feedback and synthetic data generation. Meanwhile, inference-time scaling, demonstrated by OpenAI's o1 model, allows for improved answer quality with increased processing time.
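To illustrate the idea behind inference-time scaling (this is a minimal sketch of one common approach, best-of-n sampling, not OpenAI's actual, proprietary method): the system spends more compute per query by generating several candidate answers and keeping the highest-scoring one. The `generate` and `score` functions below are hypothetical placeholders; in a real system they would call an LLM and a learned verifier or reward model, respectively.

```python
import random

# Hypothetical stand-ins: a real system would call an LLM to produce
# candidate answers and a verifier/reward model to estimate their quality.
def generate(prompt: str) -> str:
    return f"candidate-{random.randint(0, 999)} for: {prompt}"

def score(prompt: str, answer: str) -> float:
    return random.random()  # placeholder quality estimate

def best_of_n(prompt: str, n: int) -> str:
    """Spend more inference-time compute (larger n) to get a better answer."""
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=lambda ans: score(prompt, ans))

# More samples -> more compute at inference time -> statistically better answers.
print(best_of_n("Why is the sky blue?", n=8))
```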
Huang expressed optimism about the continued growth of the AI market, driven by the ongoing modernization of data centers and the emergence of generative AI applications. He described the shift from traditional coding to machine learning as a fundamental change that will require companies to upgrade their infrastructure to support AI workloads.
Huang also highlighted the emergence of generative AI, which he likened to the advent of the iPhone, as a completely new capability that will create new market segments and opportunities. He cited examples such as OpenAI, Runway and Harvey, which provide basic intelligence, digital artist intelligence, and legal intelligence, respectively.
Nvidia’s Blackwell architecture is designed to meet the demands of this evolving AI landscape. The company has developed seven custom chips for the Blackwell system, which can be configured for air-cooled or liquid-cooled data centers and support various NVLink and CPU options.
Huang acknowledged the engineering challenges involved in integrating these systems into diverse data center architectures but remained confident in Nvidia’s ability to execute. He cited examples of successful collaborations with major partners and cloud service providers (CSPs) such as Dell, CoreWeave, Oracle, Microsoft and Google.
Nvidia is also seeing strong growth in the enterprise and industrial AI sectors. The company’s Nvidia AI Enterprise platform is being used by industry leaders to build copilots and agents.
In the industrial AI space, Nvidia’s Omniverse platform is enabling the development and operation of industrial AI and robotics applications. Major manufacturers like Foxconn are adopting Omniverse to accelerate their businesses, automate workflows, and improve operating efficiency.
“The first transformative event is moving from coding that runs on CPUs to machine learning that creates neural networks that run on GPUs,” Huang said. “The second part of it is generative AI, and we’re now producing a new type of capability that the world’s never known, a new market segment that the world’s never had.”