Meta and Nvidia have expanded a long‑term partnership that will bind the social‑media giant more tightly to one of the leading suppliers of AI hardware. The two companies said they will jointly deploy “millions” of Blackwell and Rubin GPUs alongside a large‑scale introduction of Nvidia’s Grace CPUs, while integrating Nvidia’s Spectrum‑X Ethernet switches into Meta’s open switching platform. Meta will build hyperscale data centres optimised for both training and inference to support a multi‑year AI infrastructure roadmap.
The deal marks the first mass deployment of Nvidia’s Grace CPU family as a standalone server component rather than only bundled with GPUs in the same chassis. Nvidia said the collaboration involves co‑design of CPU ecosystem libraries and software optimisations intended to lift performance per watt across successive processor generations. Both firms also signalled plans to trial and later scale Nvidia’s forthcoming Vera CPU, with a view to broader deployment around 2027.
Executives framed the agreement as a blending of frontier research and industrial‑grade infrastructure. Nvidia’s chief executive argued that Meta’s scale makes it a uniquely valuable partner for testing and deploying systems targeted at intelligent agents and personalised AI at global scale, while Meta’s leadership said it plans to use the jointly deployed Vera Rubin platform, which pairs Nvidia’s Vera CPU with its Rubin GPUs, to build clusters aimed at delivering more capable, energy‑efficient AI services to billions of users. Meta also said it will incorporate Nvidia’s security technologies into WhatsApp’s emerging AI features.
Markets reacted swiftly: Meta and Nvidia shares rose in after‑hours trading, while competitors such as AMD saw price pressure amid investor concerns about supply and customer concentration. The move comes as Meta estimates capital expenditure of up to $135 billion for the year, and analysts have flagged that a large share of that spend could go toward expanding data‑centre capacity anchored on Nvidia hardware.
Despite the depth of the pact, Meta is not betting on a single supplier. The company maintains its own silicon programme, continues to use AMD processors, and has explored Google’s TPUs for future data‑centre designs. That hedge reflects industry‑wide concern about single‑supplier dependence at a time when Nvidia parts remain in tight supply, and it mirrors a broader pattern of AI companies cultivating second sources even as they deepen their relationships with Nvidia.
The deal reinforces several trends reshaping AI infrastructure: a move toward tightly integrated CPU‑GPU‑network stacks, the growing influence of Nvidia’s platform strategy, and accelerating commercial pressure to optimise inference workloads for energy efficiency. For buyers and rivals alike, the agreement is a reminder that the winners in the next phase of AI will be those who can align chip design, system software and fleet‑level operations at hyperscale.
