Oracle and OpenAI have shelved plans for a flagship AI data‑centre expansion near Abilene, Texas, a setback for an ambitious programme that once promised to reshape US compute capacity. The move reflects a breakdown in financing talks and shifting demand from OpenAI, and underscores how large‑scale AI infrastructure projects can founder even after high‑profile political fanfare.
The aborted expansion was part of the so‑called Stargate initiative, a SoftBank, OpenAI and Oracle partnership announced with a headline figure of up to $500 billion in investment and 10 gigawatts of capacity. The original plan envisaged further build‑out adjacent to an existing core park in Abilene, adding as much as 600 megawatts of capacity to an already sizeable cluster of buildings run by Oracle's cloud infrastructure unit.
Industry sources say the Abilene campus comprises eight buildings, two of which are already operational and hosting servers that OpenAI uses to train and run its models. While Oracle and OpenAI will continue to pursue a broader 4.5‑gigawatt build‑out elsewhere in the Stargate programme, the immediate incremental expansion around Abilene has been put on ice, and the extra compute will be absorbed by other parks currently under construction.
The cancellation has created an opening for Meta. The social media giant is reportedly in talks to lease the stalled Abilene expansion from developer Crusoe, with Nvidia playing a brokerage role in discussions. Nvidia’s involvement is notable: the Abilene deployments use Nvidia chips, and the company has a clear commercial interest in ensuring future capacity continues to consume its accelerators rather than competing silicon from AMD.
The episode lays bare several structural challenges in building hyperscale AI infrastructure. Projects of this sort routinely cost billions and demand complex financing, lengthy permitting and close coordination with power utilities. The Stargate programme’s headline scale — gigawatts of power and hundreds of billions in investment — magnifies those challenges and exposes participants to shifts in demand, capital markets and geopolitics.
Geopolitical and supply‑chain frictions also loom large. Policymakers have tightened export rules on advanced AI accelerators, and chip suppliers are jockeying for influence with cloud and AI firms. Nvidia’s active role in negotiating the Abilene outcome illustrates how chipmakers can shape where and how compute is deployed, a dynamic that will influence both commercial competition and national industrial strategy.
For OpenAI and Oracle the setback is inconvenient but not existential: core operations in Abilene continue, and the consortium still plans significant overall capacity. For Meta the episode is an opportunity to secure additional large‑scale capacity cheaply and quickly, accelerating its own ambitions to host large‑scale generative models. For local communities and utilities, however, the pause highlights the uncertainty such marquee projects bring, from promised jobs and tax revenue to the strain of integrating new, power‑hungry facilities into regional grids.
OpenAI’s head of infrastructure, Sachin Katti, framed the decision plainly on social media: the Abilene site remains one of the largest AI data‑centre parks in the United States, and while further expansion was considered, the organisation will redeploy planned additional compute to other locations. The episode is a reminder that building the backbone for the next generation of AI will be as much an exercise in project finance, supply‑chain diplomacy and energy planning as it is in model architecture.
