Nvidia is rolling out a new series of artificial intelligence (AI) products at its annual developer conference in San Jose, California, as it strives to strengthen its reputation as the top choice for companies specializing in AI technologies.
On Monday (March 18), Nvidia unveiled its latest generation of AI chips and software designed to power AI models. The company also introduced foundational models for humanoid robots.
Following the AI surge triggered by OpenAI’s ChatGPT in late 2022, Nvidia has seen its stock value increase fivefold, with its sales tripling. The company’s advanced server GPUs are pivotal for developing and applying large-scale AI models, attracting substantial investments from tech giants such as Microsoft and Meta, who have spent billions on these chips.
Nvidia’s latest generation of AI graphics processors, Blackwell, marks a new era in AI hardware. The debut chip of the series, the GB200, is expected to ship later this year. By introducing more powerful chips, Nvidia aims to spur new orders amid continued high demand for its current generation of GPUs, such as the “Hopper” H100.
Nvidia said its Blackwell-based processors, such as the GB200, represent a significant leap in performance for AI firms, boasting 20 petaflops of AI capability, compared to the four petaflops offered by the H100. According to Nvidia, this enhanced power allows AI organizations to develop more complex models.
The chip includes what Nvidia refers to as a “transformer engine,” which is specially designed to operate transformer-based AI technologies, a key component behind ChatGPT’s functioning.
“The B200 boasts a significant performance leap over its predecessors,” Lars Nyman, chief marketing officer at CUDO Compute, told PYMNTS in an interview. “This translates to AI tasks like training complex models, running simulations and analyzing massive datasets being completed in a fraction of the time.”
The increased efficiency of the B200 could make AI more accessible to a broader range of businesses and organizations, Nyman said.
“Increasingly computationally intensive tasks now become feasible, allowing smaller players to leverage AI for tasks previously limited to large corporations,” he added. “With the B200, entirely new applications of AI become possible. We could see more sophisticated chatbots, highly realistic simulations and even breakthroughs in areas like drug discovery and materials science.”
The new AI chips could power mechanical items as well as software, as Nvidia looks toward robotics.
Nvidia is jumping into the field of humanoid robotics with its new foundation model, Project GR00T. The AI system, trained on vast datasets, is supposedly versatile enough to handle a multitude of tasks, ranging from generating text to producing videos and images.
Project GR00T is designed to enable humanoid robots to grasp natural language and replicate human movements by observing and learning from human actions. This capability allows these robots to quickly acquire skills such as coordination and dexterity, which are necessary for navigating, adapting and interacting within the human environment.
Alongside Project GR00T, Nvidia unveiled two additions to its Isaac robotics platform: the Isaac Manipulator and the Isaac Perceptor.
The Isaac Manipulator is a collection of foundation models tailored for the control of robotic arms, aiming to enhance their efficiency and versatility. Meanwhile, the Isaac Perceptor aims to equip robots with advanced visual capabilities through “multi-camera, 3D surround-vision,” particularly benefiting robots in manufacturing and fulfillment operations.
Nvidia also launched the Jetson Thor computer, a potent computing solution explicitly designed for humanoid robots. Powered by Nvidia’s Thor system-on-a-chip, Jetson Thor is poised to be the central processing unit for humanoid robots, offering a modular architecture that emphasizes performance, energy efficiency and compactness. Nvidia’s statement highlighted Jetson Thor’s role in facilitating complex tasks and ensuring smooth, safe interactions between people and machines.
Robots aren’t the only thing Nvidia is focusing on, as it also turns to its core enterprise customer base.
Nvidia also introduced the Nvidia Inference Microservice (NIM) as part of its enterprise software suite.
NIM is designed to make it easier to use older Nvidia graphics cards to run trained AI models, a process known as inference. That means businesses can keep using the many Nvidia graphics cards they already own, and running models this way requires far less computing power than training a new AI model from scratch.
NIM is aimed at companies that want to run AI models themselves rather than paying providers like OpenAI for AI services. Nvidia hopes this will encourage more businesses to buy Nvidia-based servers and subscribe to its enterprise service, which costs $4,500 per GPU per year.
Nvidia also plans to work closely with major AI companies, such as Microsoft and Hugging Face, to ensure their AI models run well on Nvidia’s graphics cards. With NIM, developers can deploy these models on their own servers or on Nvidia’s cloud servers without a complicated setup process. The software will also let AI run on laptops with Nvidia graphics cards, not just on large cloud servers, making it easier for more devices to run AI programs directly.
Nvidia’s technology will also help power digital twins, which provide a virtual model of a physical object, process, system or environment.
Nvidia announced it is bringing its technology to Apple’s Vision Pro AR/VR headset, a move that will let developers use Nvidia’s Omniverse tools in an AR/VR setting through the Vision Pro. The Omniverse platform gives developers the ability to craft digital twins of physical objects and environments, enabling them to predict how those objects will behave and appear in the real world.
Nvidia said that with the Omniverse tool, organizations can design digital versions of structures like factories, allowing them to visualize and plan the movement of workers within these spaces before any actual construction begins.