Meta is doubling down on its AI infrastructure with a broad, multi-year partnership with NVIDIA spanning CPUs, GPUs, networking and security technologies. The collaboration targets hyperscale data centers optimized for both AI training and inference, supporting Meta’s long-term roadmap across its platforms.
For eeNews Europe readers, this matters because it shows how one of the world’s largest AI deployers is shaping future data center architectures around performance per watt, open networking and privacy-aware AI. It also highlights where semiconductor roadmaps for CPUs, GPUs and Ethernet are converging in real-world, industrial-scale deployments.
Hyperscale AI built on NVIDIA platforms
At the core of the partnership is Meta’s plan to deploy millions of NVIDIA Blackwell and Rubin GPUs, alongside expanded use of NVIDIA CPUs and Spectrum-X Ethernet networking. Meta will build hyperscale data centers designed to handle massive AI workloads while keeping energy efficiency front and center.
“No one deploys AI at Meta’s scale — integrating frontier research with industrial-scale infrastructure to power the world’s largest personalization and recommendation systems for billions of users,” said Jensen Huang, founder and CEO of NVIDIA. “Through deep codesign across CPUs, GPUs, networking and software, we are bringing the full NVIDIA platform to Meta’s researchers and engineers as they build the foundation for the next AI frontier.”
Meta also plans to deploy NVIDIA GB300-based systems and create a unified architecture that spans on-premises data centers and NVIDIA Cloud Partner environments. The goal is to simplify operations while scaling performance as AI workloads grow.
Performance per watt drives CPU choices
On the CPU side, Meta is expanding its use of Arm-based NVIDIA Grace CPUs for production data center applications. According to the companies, these deployments deliver significant performance-per-watt gains, aligning with Meta’s energy efficiency goals.
This marks the first large-scale Grace-only deployment, supported by joint codesign and software optimization efforts. Meta and NVIDIA are also collaborating on future NVIDIA Vera CPUs, with potential large-scale deployment as early as 2027, further extending Meta’s low-power AI compute footprint and strengthening the Arm software ecosystem.
Networking and privacy at AI scale
Networking is another key pillar. Meta is adopting NVIDIA Spectrum-X Ethernet across its infrastructure, integrating it with FBOSS, the Facebook Open Switching System. The platform is designed to deliver predictable, low-latency performance for AI workloads while improving utilization and power efficiency.
On the security front, Meta has adopted NVIDIA Confidential Computing for WhatsApp private processing, enabling AI-powered features while protecting user data. The companies plan to extend these confidential computing capabilities to additional use cases across Meta’s portfolio.
“We’re excited to expand our partnership with NVIDIA to build leading-edge clusters using their Vera Rubin platform to deliver personal superintelligence to everyone in the world,” said Mark Zuckerberg, founder and CEO of Meta.