GPU who? Meta to deploy Nvidia CPUs at large scale
CPU adoption is part of a deeper partnership between the Social Network and Nvidia that will see millions of GPUs deployed over the next few years
by Tobias Mann · The Register

Move over, Intel and AMD: Meta is among the first hyperscalers to deploy Nvidia's standalone CPUs, the two companies revealed on Tuesday. Meta has already deployed Nvidia's Grace processors in CPU-only systems at scale, and is working with the GPU slinger to field its upcoming Vera CPUs beginning next year.
Apart from deployments at a few scientific institutions, Nvidia's nearly three-year-old Grace CPUs have predominantly shipped as part of so-called "Superchips," which pair the processor with Hopper or Blackwell GPUs.
These are nothing new for Meta. The Social Network has previously deployed Nvidia's Grace-Hopper Superchips as part of its Andromeda recommender system and now plans to field "millions" of Nvidia GB300 and Vera Rubin Superchips, the companies said today.
What has changed is that Meta is now using Nvidia's CPU-only Grace systems to power both general-purpose and agentic AI workloads that don't require a GPU.
"What we found is that Grace is an excellent backend datacenter CPU. It can actually deliver 2x the performance per watt on those back end workloads," Ian Buck, Nvidia's VP and General Manager of Hyperscale and HPC, said during a press briefing ahead of Tuesday's announcement. "Meta has already had a chance to get on Vera and run some of those workloads, and the results look very promising."
As a refresher, Nvidia's Grace CPUs are equipped with 72 Arm Neoverse V2 cores clocked at up to 3.35 GHz. The chip is available in a standalone config with up to 480 GB of memory, or two of the processors can be combined with up to 960 GB of LPDDR5x memory to form the Grace CPU Superchip.
LPDDR5x isn't something you typically see in server platforms, but it offers several advantages in terms of space and bandwidth. The Grace CPU Superchip boasts up to 1 TB/s of memory bandwidth across the two dies.
Nvidia's Vera CPU, officially unveiled at CES earlier this year, boosts core counts to 88 custom Arm cores and adds support for simultaneous multithreading, aka hyperthreading, as well as confidential computing functionality.
According to Nvidia, Meta will take advantage of the latter capability for private processing and AI features in its WhatsApp encrypted messaging service.
Meta's adoption of Nvidia CPUs runs counter to the broader industry trend, which has seen hyperscalers increasingly pivot to homegrown Arm CPUs like Amazon's Graviton or Google's Axion.
Alongside Meta's new CPU deployments, the hyperscaler will deploy more Nvidia GPUs and Spectrum-X networking kit, something we'd kind of figured given the tech titan's $115 billion to $135 billion capex target for 2026.
Nvidia is being rather tight-lipped about the scale of the expanded collab. However, El Reg understands the deal will contribute tens of billions of dollars to its top line. A little back-of-the-napkin math tells us that at north of $3.5 million per rack, a million GPUs works out to about $48 billion, which is a big number even for Nvidia. By way of comparison, the company earned $31.9 billion in net income on revenues of $57 billion in the quarter ended Oct. 26. It'll report Q4 earnings later this month.
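For the curious, that napkin math assumes Nvidia's 72-GPU rack-scale designs (GB300 NVL72 packs 72 GPUs per rack); the ~$3.5 million per-rack figure is an estimate, not an official price. A quick sanity check:

```python
# Back-of-the-napkin check: cost of a million GPUs at rack scale.
# Assumes 72 GPUs per rack (GB300 NVL72) and an estimated
# ~$3.5M street price per rack -- neither is an official Nvidia figure.
GPUS_PER_RACK = 72
PRICE_PER_RACK = 3.5e6  # USD, estimated

racks = 1_000_000 / GPUS_PER_RACK   # roughly 13,889 racks
total = racks * PRICE_PER_RACK      # roughly $48.6 billion
print(f"{racks:,.0f} racks -> ${total / 1e9:.1f}B")
```

Run it and you land at about $48.6 billion, in line with the figure above.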
Meta isn't just an Nvidia shop these days. The company has a not insignificant fleet of AMD Instinct GPUs humming away in its datacenters and was directly involved in the design of AMD's Helios rack systems, which are due out later this year.
Meta is widely expected to deploy AMD's competing rack systems, though no formal commitment has yet been made. ®