Meta has quietly scaled back an ambitious effort to build a top‑tier artificial‑intelligence training chip, a setback that highlights how difficult it is to displace Nvidia at the heart of the modern AI stack. Engineers working on Meta’s Meta Training and Inference Accelerator (MTIA) project abandoned a planned, high‑end design codenamed Olympus after encountering persistent design, manufacturing and software‑stability challenges. The company will instead focus on a simpler internal design while leaning more heavily on external suppliers.
The move follows earlier internal reversals: a previous MTIA design, Iris, was also shelved. Olympus had been slated to complete its design phase in the fourth quarter of 2026, but technical complexity and risk pushed Meta to reassess its timetable and ambitions. In public, Meta stresses continued investment in a diversified silicon portfolio for its own needs, but the company’s recent procurement deals tell a clearer story of pragmatic retreat.
In the past month, Meta has struck large deals with leading chipmakers. It announced a pact with AMD to buy roughly $60 billion of AI accelerators and has agreements to purchase current and future Nvidia chips. Meta has also signed a multibillion‑dollar arrangement with Google to lease accelerators for model development. Deliveries under those deals could begin as early as next year, depending on deployment schedules.
For Meta the calculation is straightforward: partnering with established suppliers speeds access to the best available compute, reduces short‑term risk and preserves capital for other data‑centre investments. Meta previously forecast capital expenditure for 2026 of about $115–135 billion, much of it earmarked for chips and servers. Converting an in‑house design into a production‑worthy product that competes on price, performance and software integration against Nvidia’s dominant ecosystem proved more costly and uncertain than expected.
More broadly, Meta’s experience is emblematic of a wider industry trend. Several large technology companies have attempted bespoke silicon projects to capture cost savings and control the stack, but many have hit barriers involving fabrication risk, driver and software maturity, and the network effects of Nvidia’s CUDA ecosystem. Nvidia’s chief executive has publicly argued that many big firms will abandon their chip projects because they cannot match Nvidia’s performance and software ecosystem — a prediction that Meta’s latest decision appears to validate.
The implications extend beyond Meta’s balance sheet. Increased purchases from AMD, Nvidia and Google consolidate demand in a market already dominated by a handful of suppliers, giving those vendors greater leverage over pricing and roadmaps. For Meta, the trade‑off is between the long‑term strategic prize of vertically integrated hardware and the near‑term necessity of securing enormous compute capacity to train and deploy generative models.
A successful in‑house AI chip would have offered Meta tighter control over costs and data‑centre architecture, and potentially differentiation in model deployment. Abandoning a high‑end design reduces that upside and keeps Meta tethered to partners whose strategic priorities do not always align with its own. Yet the decision also frees Meta to scale AI work faster with proven silicon, which matters in a market where time to model iteration is a competitive advantage.
In short, Meta’s pivot demonstrates the formidable barriers to becoming a chipmaker at Nvidia’s level and reinforces the centripetal force of incumbent ecosystems. Expect more large cloud and AI players to favour hybrid strategies that pair selective internal designs with deep commercial relationships with chip vendors, at least until alternative toolchains and foundry pathways mature sufficiently to narrow the gap.
