Nvidia stole the CES spotlight this week with a platform designed to redefine how the world builds and runs AI.
On Jan. 5, the chipmaker pulled back the curtain on Vera Rubin, a new AI computing platform built to power the world’s largest and most demanding AI systems at far lower cost.
According to Nvidia, the Rubin platform is powered by six tightly integrated chips that together function as a single AI supercomputer: the NVIDIA Vera CPU, Rubin GPU, NVLink 6 switch, ConnectX-9 SuperNIC, BlueField-4 DPU, and Spectrum-6 Ethernet switch.
Nvidia says this “extreme codesign” approach — engineering hardware and software together from the ground up — cuts AI training time and reduces the cost of generating AI tokens during inference. Compared with the current Blackwell platform, Nvidia claims Rubin can cut inference token costs by up to a factor of 10 and train large mixture-of-experts (MoE) models with a quarter as many GPUs.
“Rubin arrives at exactly the right moment, as AI computing demand for both training and inference is going through the roof,” Jensen Huang, founder and CEO, said in a statement. “With our annual cadence of delivering a new generation of AI supercomputers — and extreme codesign across six new chips — Rubin takes a giant leap toward the next frontier of AI.”

Designed for reasoning and agentic AI
Named after astronomer Vera Florence Cooper Rubin, the platform is built with a focus on advanced reasoning and agentic AI.
Nvidia says Rubin introduces five major innovations, including a new generation of NVLink interconnects, an updated Transformer Engine, Confidential Computing, a RAS Engine for reliability, and the new Vera CPU. Together, these technologies aim to handle long-context AI workloads and large-scale MoE models more efficiently than previous architectures.
The flagship system, the Vera Rubin NVL72, combines 72 Rubin GPUs and 36 Vera CPUs in a rack-scale design. Nvidia will also offer the HGX Rubin NVL8 platform for x86-based AI servers.
Industry adoption and ecosystem
The Rubin platform is already drawing support from across the AI ecosystem. Nvidia says expected adopters include major cloud providers, AI labs, and hardware partners such as AWS, Google, Microsoft, Meta, OpenAI, Anthropic, CoreWeave, Dell, HPE, Lenovo, and Oracle.
The company also noted that the Rubin platform is already in full production. Systems based on Vera Rubin are expected to become available from partners in the second half of 2026, with early deployments planned at major cloud providers.
Follow our live CES 2026 coverage for ongoing announcements, new devices, and emerging tech themes.

