AI’s Hunger for Memory Could Keep Global Chip Shortages Dragging On Until 2027

Synopsys CEO Sassine Ghazi warns that the current memory-chip shortage, driven by heavy demand from AI data centres, is likely to last through 2026 and potentially into 2027. Concentrated production, long lead times for new fabs and booming demand for HBM mean elevated prices and allocation pressures may persist, benefiting memory suppliers but squeezing device makers and other industries.


Key Takeaways

  • Synopsys CEO Sassine Ghazi says memory shortages driven by AI infrastructure could persist through 2026 and into 2027.
  • Most high-end DRAM and HBM capacity from Samsung, SK Hynix and Micron is being allocated to AI data centres.
  • Adding new production capacity typically takes about two years, prolonging the tight market.
  • Rising memory prices risk higher consumer electronics costs and supply constraints for the automotive and industrial sectors.
  • Memory manufacturers and EDA/IP suppliers stand to gain, while OEMs and downstream users face margin and supply risks.

Editor's Desk

Strategic Analysis

The sustained diversion of premium memory to AI infrastructure transforms a familiar semiconductor cycle into a strategic contest. If AI providers continue to dominate allocations, memory prices and scarcity will become a structural constraint on broader tech adoption rather than a transient market blip. Policymakers seeking digital sovereignty may accelerate subsidies and domestic capacity programs, but those projects also take years. Corporates should prioritize long-term supply contracts, invest in alternative architectures that reduce HBM reliance, and consider regional diversification. Investors should expect memory incumbents to enjoy outsized cashflows in the near term, while design-tool vendors and cloud providers will remain pivotal players in determining how the next tranche of capacity is deployed.

China Daily Brief Editorial

Synopsys chief executive Sassine Ghazi has delivered a blunt update to an industry already braced for disruption: the current squeeze on memory chips is unlikely to ease before 2026 and may persist into 2027. Ghazi says much of the high-end DRAM and high-bandwidth memory (HBM) being produced by the world’s leading suppliers is being absorbed by AI data centres and infrastructure projects, leaving little spare capacity for smartphones, laptops and other consumer and industrial markets.

That observation matters because Synopsys sits at the centre of the chip ecosystem. The California-based electronic design automation (EDA) firm supplies the software and IP that chipmakers and system designers use to build silicon. Its CEO’s read on allocation and capacity is therefore informed by conversations with both foundries and the large chip designers racing to supply AI accelerators and servers.

The mechanics of the squeeze are straightforward. Training and inference at scale consume vast amounts of fast, wide memory, especially HBM. Cloud providers and AI infrastructure builders have committed tens of billions of dollars to expand data-centre capacity, and much of the incremental demand for premium memory is going to those projects. Supply is concentrated: Samsung, SK Hynix and Micron still dominate high-end DRAM and HBM production, and expanding a fabrication line takes time and capital.
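To make the scale concrete, here is a minimal back-of-envelope sketch of how quickly model state alone consumes HBM. All figures (bytes per parameter, per-device HBM capacity, the 70-billion-parameter model size) are illustrative assumptions for this sketch, not numbers from Ghazi, Synopsys or the memory makers.

```python
import math

# Back-of-envelope sketch of why large-model training and serving strain HBM supply.
# All figures are illustrative assumptions, not vendor or Synopsys data.

BYTES_PER_PARAM_FP16 = 2      # 16-bit weights
TRAINING_STATE_BYTES = 8      # rough extra bytes per parameter for gradients and optimizer state
HBM_PER_ACCELERATOR_GB = 80   # assumed HBM capacity of one high-end accelerator

def accelerators_to_hold(params_billion: float, training: bool) -> int:
    """Minimum accelerators needed just to hold model state in HBM (ignores activations and KV cache)."""
    bytes_per_param = BYTES_PER_PARAM_FP16 + (TRAINING_STATE_BYTES if training else 0)
    total_gb = params_billion * bytes_per_param  # billions of params x bytes/param = gigabytes
    return math.ceil(total_gb / HBM_PER_ACCELERATOR_GB)

# A hypothetical 70-billion-parameter model:
print(accelerators_to_hold(70, training=True))   # 9 devices' worth of HBM for training state
print(accelerators_to_hold(70, training=False))  # 2 devices' worth just for FP16 weights
```

Multiply that by the thousands of accelerators in a single AI cluster and the pull on premium memory becomes clear.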

Ghazi notes that moving from an investment decision to actual production usually takes about two years — a delay that helps explain why shortages can become prolonged. Memory markets have long been cyclical, swinging between shortages and gluts, but several executives and analysts now describe the current phase as a “supercycle” driven by secular AI demand rather than the shorter-term consumer cycles of the past.
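A toy projection illustrates why the lag matters; the growth rates and index values below are assumptions chosen only to show the shape of the problem, not forecasts from Ghazi or analysts.

```python
# Toy projection of the capacity gap created by a roughly two-year fab lead time.
# Growth rates and index values are assumptions for illustration only, not forecasts.

demand = 100.0             # index of premium-memory demand today
supply = 100.0             # index of premium-memory supply today
DEMAND_GROWTH = 0.40       # assumed annual growth in AI-driven demand
SUPPLY_GROWTH_RAMP = 0.30  # assumed supply growth once new lines reach volume
SUPPLY_GROWTH_BASE = 0.05  # assumed growth from existing lines in the meantime
LEAD_TIME_YEARS = 2        # investment decision to volume production

for year in range(1, 5):
    demand *= 1 + DEMAND_GROWTH
    # New capacity contributes only after the lead time has elapsed.
    growth = SUPPLY_GROWTH_RAMP if year > LEAD_TIME_YEARS else SUPPLY_GROWTH_BASE
    supply *= 1 + growth
    print(f"Year {year}: demand {demand:.0f}, supply {supply:.0f}, shortfall {demand - supply:.0f}")
```

Under these assumptions the shortfall keeps widening even after new fabs ramp, because demand compounds throughout the waiting period.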

The knock-on effects are visible down the supply chain. Rising memory prices push component costs for consumer electronics higher, creating inflationary pressure that manufacturers may pass on to buyers. Other sectors — notably automotive and industrial applications that increasingly rely on bespoke memory for safety-critical functions — risk being deprived of supply as priorities tilt towards AI infrastructure.
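A rough pass-through calculation shows how that pressure reaches device prices. The bill-of-materials total, memory share and price increase below are assumptions for illustration, not reported figures.

```python
# Illustrative pass-through of a memory price rise to a device's bill of materials (BOM).
# The BOM total, memory share and price increase are assumptions, not reported figures.

BOM_COST = 400.00              # assumed total BOM of a mid-range laptop, USD
MEMORY_SHARE = 0.12            # assumed share of the BOM spent on DRAM and NAND
MEMORY_PRICE_INCREASE = 0.50   # assumed rise in memory prices

memory_before = BOM_COST * MEMORY_SHARE
memory_after = memory_before * (1 + MEMORY_PRICE_INCREASE)
bom_after = BOM_COST - memory_before + memory_after

print(f"Memory cost: ${memory_before:.2f} -> ${memory_after:.2f}")
print(f"BOM cost: ${BOM_COST:.2f} -> ${bom_after:.2f} ({bom_after / BOM_COST - 1:.1%} increase)")
```

Even a modest memory share of a device's cost translates into a visible increase when memory prices jump, which is what manufacturers may pass on to buyers.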

Geopolitics and industrial economics amplify the problem. High-end memory production is capital-intensive and geographically concentrated in East Asia; export-control regimes, trade frictions and government subsidies all influence where new capacity is planned and how quickly it comes online. For companies and governments seeking resilience, the options are costly: build more fabs, diversify suppliers, or redesign systems to rely less on scarce memory types.

For investors and corporate strategists, the current backdrop is a mixed opportunity. Memory majors are in a strong pricing position and have called this period a “golden era.” EDA firms and IP suppliers like Synopsys stand to benefit from sustained chip design activity. But original equipment manufacturers face margin pressure and possible product delays, while chip buyers may accelerate efforts to secure supply through long-term contracts or strategic investments.

The upshot is that the memory market is poised to remain a choke point for technology deployment. Unless producers radically accelerate capacity expansion or demand moderates, bottlenecks and elevated prices for high-bandwidth memory will likely be features of the market for the next 18–24 months, shaping the pace and cost of AI roll-outs and affecting a wide range of consumer and industrial products.
