NVIDIA Unveils Blackwell Ultra GB300 Amped With 288GB Of HBM3e

HotHardware

NVIDIA's Blackwell GB200 is an absolutely monstrous processor, with up to 10 petaflops of dense FP4 tensor compute and 192GB of lightning-fast HBM3e memory delivering 8 TB/s of bandwidth per GPU. Even that 192GB, though, is still limiting for hyperscalers building the latest, greatest AI models. For those companies that need ever-greater memory capacity, NVIDIA has just revealed the Blackwell Ultra GB300 GPU.

NVIDIA CEO Jensen Huang announced Blackwell Ultra today on stage at GTC, where he described it as NVIDIA's most powerful GPU to date. It certainly fits that description: NVIDIA says Blackwell Ultra offers 15 petaflops of dense FP4 tensor compute, a 50% uplift from Blackwell GB200. The real story is the RAM, though: 288GB of HBM3e memory on a single package.

That means a rack of Blackwell Ultra GB300 Superchips, such as the NVIDIA GB300 NVL72, offers no less than 20TB of HBM3e memory and a staggering 1.1 exaFLOPS of dense FP4 compute, thanks to the 72 Blackwell Ultra GB300 GPUs spread across the 36 Superchips inside. As before, each Superchip combines two Blackwell Ultra GPUs with a Grace CPU and its pool of LPDDR5X memory, while the rack as a whole provides some 130 TB/s of NVLink connectivity between the Superchips and 14.4 TB/s of off-rack networking.
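For a quick sanity check on that rack-level math, here is a back-of-the-envelope sketch using only the per-GPU figures quoted above (288GB of HBM3e and 15 petaflops of dense FP4 per GPU) and the stated 72-GPU, 36-Superchip rack layout; nothing here goes beyond those numbers.

```python
# Back-of-the-envelope totals for a GB300 NVL72 rack, based on the per-GPU
# figures quoted above. Values are approximate, decimal units throughout.

GPUS_PER_SUPERCHIP = 2         # two Blackwell Ultra GPUs per Superchip
SUPERCHIPS_PER_RACK = 36       # 36 Superchips in an NVL72 rack
HBM3E_PER_GPU_GB = 288         # GB of HBM3e per Blackwell Ultra GPU
DENSE_FP4_PFLOPS_PER_GPU = 15  # dense FP4 tensor petaflops per GPU

gpus_per_rack = GPUS_PER_SUPERCHIP * SUPERCHIPS_PER_RACK        # 72 GPUs
hbm_tb = gpus_per_rack * HBM3E_PER_GPU_GB / 1000                # ~20.7 TB of HBM3e
dense_fp4_ef = gpus_per_rack * DENSE_FP4_PFLOPS_PER_GPU / 1000  # ~1.08 exaFLOPS

print(f"{gpus_per_rack} GPUs | {hbm_tb:.1f} TB HBM3e | {dense_fp4_ef:.2f} EF dense FP4")
```

Rounded, that works out to the "no less than 20TB" of HBM and the roughly 1.1 exaFLOPS of dense FP4 NVIDIA is quoting.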

Of course, as before, the "point" of the NVL72 racks is that they operate as a single massive "GPU". Indeed, Huang refers to the GB300 NVL72 as "the ultimate scale-up," for which he says there is "no replacement." Beyond the spec upgrades, Blackwell Ultra apparently also brings new instructions that accelerate the attention operation in AI computations, which NVIDIA says doubles performance for that specific workload.
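For context, the attention operation in question is the scaled dot-product attention at the core of transformer models. The minimal NumPy sketch below is purely illustrative, not NVIDIA's implementation, and the new instructions themselves aren't detailed here; the exponential-heavy softmax step is the kind of hot spot such dedicated instructions typically target.

```python
# Illustrative scaled dot-product attention in NumPy -- a sketch of the
# operation Blackwell Ultra's new instructions are said to accelerate,
# not NVIDIA's implementation.
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """q, k, v: arrays of shape (seq_len, head_dim)."""
    d = q.shape[-1]
    scores = (q @ k.T) / np.sqrt(d)                 # query/key similarity
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)                        # exponential-heavy softmax...
    weights /= weights.sum(axis=-1, keepdims=True)  # ...normalized per query
    return weights @ v                              # weighted sum of values

# Example: 8 tokens with 64-dimensional heads (arbitrary illustrative sizes)
rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((8, 64)) for _ in range(3))
print(scaled_dot_product_attention(q, k, v).shape)  # (8, 64)
```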

Jensen's on-stage presentation was heavily focused on framing datacenters as "AI factories," to the point that he claims all "businesses with factories" will soon need "two factories": one for production, and one for AI production, to make the AI that powers their products. He further claimed that Blackwell offers 40 times the "token revenue" of Hopper when configured this way.

Blackwell Ultra is coming in the second half of 2025, in the form of NVL72 racks as well as HGX B300 NVL16 systems with sixteen GPUs inside. Of course, those NVL72 racks can also be organized into DGX SuperPODs with as many Blackwell Ultra GPUs as you can afford. As Jensen says, "the more you buy, the more you save." For cloud service providers hosting AI services, that may actually be the case.