Speaking at NVIDIA’s GTC conference in San Jose on March 17, SK Group chairman Chey Tae-won delivered a stark prognosis for the semiconductor industry: the global shortage of memory chips—particularly the high-bandwidth memory (HBM) used by artificial-intelligence accelerators—could persist until 2030. He attributed the problem to systemic production bottlenecks and said demand from AI workloads has pushed the supply shortfall above 30% for certain AI-focused memory chips. Chey warned that DRAM, NAND and HBM prices are likely to rise and remain elevated for an extended period as wafer capacity and upstream supply chains struggle to catch up.
The comments echoed remarks Chey made in Washington last month and reinforced a familiar theme for memory makers: surging AI demand is colliding with long lead times for new fab capacity. SK Hynix, one of the world’s largest memory manufacturers and a key supplier of HBM to NVIDIA, told investors it will need four to five years to expand wafer output enough to relieve the squeeze. The company is reportedly weighing a U.S. ADR listing to broaden its investor base and may announce measures to stabilise DRAM prices in the near term.
NVIDIA’s presentations at GTC amplified the dynamic. CEO Jensen Huang forecast enormous demand for next‑generation AI chips, a message that helped lift South Korean semiconductor stocks; Samsung and SK Hynix shares rallied after Huang revealed a new Groq LPU chip produced by Samsung and reiterated bullish long‑term AI demand projections. That bullish outlook for AI compute capacity helps explain why HBM—a small, specialised but indispensable component for accelerators—has become a chokepoint: HBM stacks are complex, require advanced packaging and rely on constrained wafer, foundry and test resources.
The practical implications of a prolonged memory squeeze are wide. Higher DRAM/NAND prices would raise costs for cloud providers and data centres, translating into more expensive training and serving of large AI models. Industries from smartphones to enterprise storage may face delayed product cycles or higher prices as memory is diverted to high‑value AI customers. Politically, the strain underscores why governments and companies are racing to secure domestic or allied supply chains for advanced semiconductors and the equipment and chemicals that feed them.
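To see why HBM capacity drives serving costs so directly, a rough back-of-envelope calculation helps. The sketch below is purely illustrative—model sizes, byte-per-parameter figures, the KV-cache overhead factor and the 80 GB-per-accelerator assumption are hypothetical round numbers chosen for the example, not figures from the article:

```python
import math

def serving_memory_gb(params_b, bytes_per_param=2, kv_overhead=0.3):
    """Rough inference footprint: weights (fp16, 2 bytes/param) plus
    an allowance for KV cache and activations (here 30%)."""
    weights_gb = params_b * bytes_per_param  # params in billions -> GB
    return weights_gb * (1 + kv_overhead)

def accelerators_needed(params_b, hbm_per_gpu_gb=80):
    """Minimum accelerator count, assuming e.g. 80 GB of HBM per GPU."""
    return math.ceil(serving_memory_gb(params_b) / hbm_per_gpu_gb)

for size_b in (70, 405, 1000):
    print(f"{size_b}B params -> ~{accelerators_needed(size_b)} accelerators")
# 70B -> ~3, 405B -> ~14, 1000B -> ~33
```

Even under these simple assumptions, model size translates almost linearly into HBM demand, which is why sustained memory price increases feed straight into the cost of serving large models.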
Several structural factors explain why relief will be slow. Building new wafer fabrication and advanced packaging capacity takes years and requires massive capital investment, specialised equipment (much of it produced by a handful of firms), and a skilled workforce. The HBM supply chain is further constrained at the back end—assembly capacity, thermal interface materials and through‑silicon via (TSV) processes—none of which can be scaled overnight. Even with sizeable capex commitments, companies face long lead times before new capacity yields the tightly specified, high‑yield memory required by AI accelerators.
For investors and policy makers, Chey’s forecast is both a warning and a signal. For memory vendors, the prospect of sustained high prices justifies rapid investment and strategic partnerships; for customers, it suggests the need to diversify procurement and consider architectural changes—such as more efficient memory usage or alternative memory hierarchies—to reduce dependence on scarce HBM. For governments, the shortage strengthens the case for subsidies, export controls and collaboration to shore up critical nodes in the semiconductor supply chain.
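One of the architectural levers mentioned above—more efficient memory usage—can be as simple as storing weights at lower precision. The figures below are an illustrative aside, not from the article; the 405-billion-parameter model size is a hypothetical example:

```python
def weights_gb(params_b, bits):
    """Memory footprint of model weights alone, in GB,
    for a given parameter count (billions) and bit width."""
    return params_b * bits / 8

# Quantising from 16-bit to 4-bit cuts the HBM needed for
# weights by 4x, easing dependence on scarce memory supply.
for bits in (16, 8, 4):
    print(f"{bits}-bit weights for a 405B model: {weights_gb(405, bits)} GB")
# 16-bit: 810.0 GB, 8-bit: 405.0 GB, 4-bit: 202.5 GB
```

Quantisation trades some model quality for a large reduction in memory footprint, which is one reason customers facing HBM scarcity pursue it.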
In the near term, expect price volatility and bidding wars for production slots. In the medium term, market shares could shift toward firms that successfully scale HBM capacity or pursue vertical integration. And in the longer run, persistent shortages would accelerate strategic competition over chip production capacity and could reshape how AI systems are architected to be more memory‑efficient.
