Samsung Electronics has scheduled production of its next‑generation high‑bandwidth memory, HBM4, to begin in February and, having passed customer validation, is poised to supply the chips to major GPU makers including Nvidia and AMD. The move marks a significant escalation in the battle for HBM share, where Samsung has trailed long‑time leader SK Hynix but has been steadily clawing back ground through a business overhaul and accelerated product development.
HBM is a specialised stacked DRAM used alongside high‑performance accelerators to feed massive AI models with bandwidth that conventional memory cannot provide. Samsung’s entry into HBM4 production will widen the field of suppliers for the next wave of AI‑centric accelerators and is likely to affect pricing, supply timelines and design choices at Nvidia, AMD and other customers planning systems around the latest GPUs.
Market reaction was immediate: Samsung’s stock ticked up in early trading while SK Hynix shares slid, reflecting investor reassessment of competitive positioning. Samsung reported a blowout preliminary operating profit for Q4 2025 — about 20 trillion won (~$13.8bn), a 208% year‑on‑year jump — a result executives attributed to a memory price upswing driven by AI demand; the company will publish full results on Thursday alongside SK Hynix, when both may disclose HBM4 order details.
Samsung says its HBM4 has cleared final qualification tests at Nvidia and AMD and will begin shipments next month, though exact volumes have not been disclosed. That validation is consequential: device makers generally require extensive co‑validation to certify memory stacks for power, thermal and signal integrity in their GPUs and accelerators, and passing those tests shortens the path from silicon tape‑out to commercial deployment.
The competitive backdrop matters. SK Hynix has dominated the HBM market for years and secured long‑term commitments for 2026 supply, and it is moving wafers into a new M15X fab intended for HBM production. But Samsung’s market share has risen from roughly 13% in early 2025 to over 20% by the third quarter, and analysts now expect Samsung’s share to exceed 30% this year if ramps proceed as projected — a shift that would materially narrow SK Hynix’s lead.
For Nvidia, which says its next platform, Vera Rubin, is in full production with shipments planned for the second half of 2026, access to qualified HBM4 from multiple suppliers reduces single‑source risk and gives it leverage in pricing talks. For the broader AI hardware ecosystem, earlier and wider availability of HBM4 will accelerate system upgrades that demand higher memory bandwidth and capacity, potentially compressing timeframes for model scaling and deployment.
Uncertainties remain. Neither Samsung nor SK Hynix has disclosed initial HBM4 volumes or long‑term allocation agreements, and SK Hynix’s prior negotiations for 2026 supply could preserve it as the primary source for certain customers. The initial market will also be shaped by manufacturing yields, wafer supply, and how quickly GPU makers convert validated memory into production systems.
In short, Samsung’s HBM4 ramp is a watershed moment for an increasingly concentrated memory segment that sits at the heart of the AI hardware boom. If Samsung sustains its momentum, the memory landscape for AI accelerators may become more competitive, improving supply security for GPU vendors and altering margins across the memory value chain.
