On a stock chart, the “pivot” appears dramatic, but in Meta’s world it begins with something more commonplace: heat, power, and floor space. Walk through a modern data center, with its bright white aisles, steady whine of fans, and the faint metallic smell that clings to new racks, and you quickly realize this is no longer just a software story. Even after announcing massive Nvidia deployments, Meta’s choice to deepen its relationship with AMD comes across less as a betrayal and more as a business trying to avoid being held captive by supply chains and physics.
The headline version is straightforward: Zuckerberg “traded” Nvidia for AMD. The real world is messier, and that is what makes it worth untangling. Meta’s long-term infrastructure partnership with Nvidia is an exceptionally broad embrace: millions of Nvidia Blackwell GPUs and upcoming Rubin GPUs, plus Nvidia CPUs and networking.
| Category | Details |
|---|---|
| Person | Mark Zuckerberg |
| Company | Meta Platforms, Inc. |
| Chip Partners in Focus | Nvidia, AMD |
| What Changed | Meta expanded from a largely Nvidia-centered AI buildout to a deliberate multi-vendor strategy including AMD |
| Reported Scale | Up to 6 gigawatts of AMD computing capacity for Meta, with deployments beginning late 2026 |
| Reported Deal Size | Up to ~$60B over five years (reported) |
| Meta 2026 Capex Guidance | $115B–$135B (reported) |
| Reference | https://about.fb.com/news/2026/02/meta-amd-partner-longterm-ai-infrastructure-agreement/ |
Then, seemingly reversing course, Meta secures a huge AMD commitment: up to six gigawatts of AI computing, beginning with about one gigawatt in late 2026. According to Reuters, the deal could total up to $60 billion over five years. Investors seem to want an “either/or” narrative. The real story is “both.”
Simple leverage is one of the causes. Nvidia has been the go-to solution for frontier AI training, and its prices reflect this. You can practically picture the procurement meetings when a buyer is spending at Meta’s scale (Reuters puts 2026 capex between $115 billion and $135 billion): fluorescent lighting, a long conference table, someone silently sliding a spreadsheet across the surface. At that kind of budget, Meta wants more than chips. It wants clarity: roadmaps that line up, delivery windows that don’t slip, and terms that don’t feel like a penalty for arriving late to the AI party.
Another factor is that the AI boom is splitting into two distinct problems. One is training large models. The other is running them (inference) at frightening scale, cheaply and reliably. AMD and Meta have discussed long-term deployment and “efficient inference compute,” focusing on what happens once the model is built and the real world begins bombarding it with billions of requests. Perhaps that’s where Meta’s true strength lies: not only does it have a clever model, it can serve it to Instagram and WhatsApp users without wrecking the economics every time someone requests a caption.
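Why do inference economics dominate at this scale? A toy cost model makes the mechanics clear. Every number below is an illustrative assumption, not a reported figure from the Meta or AMD deal:

```python
# Toy model of serving cost: what one million inference requests cost
# on a single accelerator type. All inputs are hypothetical assumptions.

def cost_per_million_requests(gpu_hour_usd=2.0, requests_per_gpu_second=50):
    """Serving cost (USD) for one million requests.

    gpu_hour_usd:            assumed all-in hourly cost of one accelerator
    requests_per_gpu_second: assumed sustained throughput per accelerator
    """
    requests_per_gpu_hour = requests_per_gpu_second * 3600
    return gpu_hour_usd / requests_per_gpu_hour * 1_000_000

baseline = cost_per_million_requests()
# Doubling throughput (or halving the hourly cost) halves serving cost:
improved = cost_per_million_requests(requests_per_gpu_second=100)
print(f"${baseline:.2f} -> ${improved:.2f} per million requests")
```

The point of the sketch is the lever, not the numbers: at billions of requests per day, a vendor that shaves even a fraction off cost-per-request compounds into enormous savings, which is why “efficient inference compute” is worth a multi-gigawatt commitment.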
Even seasoned infrastructure professionals are taken aback by the six-gigawatt figure. That is not a cluster; it is a utility-scale electrical commitment. It implies cooling systems, substations, land deals, local politics, and those thick, ribbed power cables that look like they belong on a ship.
Dedicating that much capacity to AMD silicon also sends a quietly provocative signal: Meta believes AMD’s Instinct line is strong enough to be more than a backup option. Reuters linked the agreement to AMD’s upcoming MI450 hardware and broader CPU projects, with performance and technical milestones built in. That detail matters because it implies co-engineering rather than impulse buying.
There is also a defensive logic that feels very 2026. Between export restrictions, national subsidies, and corporate alliances forming in real time, the AI supply chain has become a geopolitical object. Bet on one vendor, even a well-liked one, and it becomes a single point of failure.
And it’s still unclear whether the next “shortage” will hit power transformers, memory bandwidth, networking equipment, GPUs, or simply engineers who know how to wire these systems without melting them. Using multiple vendors does not eliminate that risk, but it gives Meta more room to maneuver when something inevitably breaks.
Nvidia, for its part, isn’t standing still. Reuters reported a Meta deal in which Nvidia supplies CPUs independently of GPUs, and Jensen Huang has been publicly preparing investors for a fiercer battle in CPUs, arguing that AI deployment will lean more heavily on CPU horsepower than many anticipated.
By claiming ownership of the rack rather than just the accelerator, Nvidia appears to be widening the battlefield. Seen that way, Meta’s AMD move looks like a proactive refusal to let any one company define the entire stack.
What makes the moment feel tense, though, is the underlying message: everyone is spending as if the AI era has no limits. That confidence might be warranted, or it might turn out to be the kind of overbuild tech occasionally produces when the narrative is too strong to ignore.
It’s hard to ignore how industrial this has become when you see how rapidly Meta is investing in physical infrastructure: contracts measured in gigawatts, budgets measured in hundreds of billions. The real shift isn’t from Nvidia to AMD. It’s from “move fast” culture to power-plant culture, where a defective transformer can’t be fixed with a patch.
Why, then, did Zuckerberg “trade” Nvidia for AMD? Because leverage is the only language that works when demand is this crowded, because inference economics is becoming a knife fight, and because Meta wants more compute than any single supplier can comfortably promise. The crazy part is that it may not even be a judgment on Nvidia’s technology. Meta may simply be acknowledging, in public, that the scarcest resource in AI is no longer ideas. It’s capacity.