Meta Platforms has signed an agreement with Amazon Web Services to deploy AWS’s custom Graviton processors at scale, deepening a long-standing partnership as the Facebook parent builds out infrastructure for its next generation of artificial intelligence systems.
The deployment will begin with tens of millions of Graviton processor cores, with capacity to expand further as Meta’s AI needs grow, the companies said on Monday.
The move underscores the increasing role of specialised central processing units in AI infrastructure. While graphics processing units remain the backbone of training large AI models, emerging “agentic” AI systems—which can reason, plan and execute multi-step tasks autonomously—also require vast CPU resources for workloads such as real-time reasoning, code generation, search and task orchestration.
Meta said the Graviton chips would support a range of workloads across its operations, including AI applications that must handle billions of interactions while coordinating complex, multi-step processes.
AWS’s latest Graviton5 processor is designed for such tasks. Built on 3-nanometre technology, the chip features 192 cores and a cache five times larger than that of its predecessor. Amazon said it delivers up to 25% better performance than the previous generation, while improving energy efficiency.
The chips run on the AWS Nitro System, Amazon’s cloud infrastructure platform that provides high-performance, secure access to hardware. Graviton5 instances also support Elastic Fabric Adapter technology, enabling low-latency, high-bandwidth communication between servers—an important feature for large-scale distributed AI workloads.
“As we scale the infrastructure behind Meta’s AI ambitions, diversifying our compute sources is a strategic imperative,” Santosh Janardhan, Meta’s head of infrastructure, said in a statement.
Nafea Bshara, vice president and distinguished engineer at Amazon, said the expanded deployment showed how purpose-built silicon could support the next wave of AI development.
The agreement also highlights growing industry focus on energy-efficient computing as AI demand surges, with technology companies seeking to balance performance, cost and sustainability goals.