Google Doubles Down on AI Compute: $175–185bn CapEx, Gemini Adoption, and a Cloud Surge

Alphabet reported strong 2025 results and announced a dramatic increase in 2026 capital expenditure — $175–185 billion — to scale AI compute and data‑centre capacity. Gemini 3 adoption, a 48% jump in cloud revenue, and falling model service costs underpin management’s argument that the spending is necessary to meet surging AI demand.


Key Takeaways

  • Alphabet plans $175–185 billion of capex in 2026, split roughly 60% to servers and AI compute and 40% to data‑centre and network infrastructure.
  • Google Cloud revenue surged 48% in Q4 to $17.7 billion, with annualised revenue above $70 billion and a $240 billion backlog of committed orders.
  • Gemini 3 adoption accelerated rapidly: 750 million MAUs for the Gemini app, 8 million paid Gemini Enterprise seats, and model token processing exceeding 10 billion tokens per minute.
  • Management says more than half of ML compute will serve cloud customers; TPU accelerators remain a cloud differentiator and are not being sold as standalone hardware.
  • The chief near‑term risk is compute bottlenecks (electricity, local‑area networking and supply chains), which the 2026 capex is intended to address.

Editor's Desk

Strategic Analysis

Alphabet’s decision to commit the best part of two hundred billion dollars to capex next year is a strategic inflection point for the cloud and AI industry. The company is effectively turning physical infrastructure — servers, power, cooling and networks — into a competitive moat around its software and model stack. That will intensify demand for chips and data‑centre inputs and could push up prices and lead times for smaller cloud providers and enterprise buyers. For customers, greater capacity and lower model costs should enable more ambitious AI deployments, but the transition risks are material: higher depreciation, tighter short‑term margins and the operational complexity of building bespoke facilities globally. Politically and regulatorily, the move tightens Alphabet’s market position, making scrutiny of competitive effects and data access more likely. Ultimately, this is a high‑stakes, capital‑intensive push to own the full stack — if Alphabet executes, it can entrench a leadership position in enterprise AI; if it stumbles, those same investments could weigh on returns for years.

China Daily Brief Editorial

Alphabet ended 2025 with a clear message to markets and rivals: scale compute or fall behind. In its quarterly earnings call the company said it will target $175–185 billion of capital expenditure in 2026, a near‑doubling of prior annual outlays, and framed that spending as a defensive and offensive necessity to support AI models, data centres and networking. Sundar Pichai and the finance team laid out where the money will go — roughly 60% to servers and 40% to data‑centre and network build‑out — and emphasised that more than half of their machine‑learning compute will be allocated to Google Cloud customers.
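The disclosed allocation can be sketched as simple arithmetic. The guidance range and the 60/40 split are the figures reported above; taking the midpoint of the range is an illustrative assumption, not company guidance.

```python
# Back-of-the-envelope split of the 2026 capex guidance, using the
# midpoint of the $175-185bn range as an illustrative assumption.
capex_low, capex_high = 175e9, 185e9        # 2026 guidance range, USD
midpoint = (capex_low + capex_high) / 2     # $180bn

servers_share = 0.60        # servers and AI accelerators (as disclosed)
infrastructure_share = 0.40 # data-centre and network build-out

servers = midpoint * servers_share
infrastructure = midpoint * infrastructure_share

print(f"Midpoint capex: ${midpoint / 1e9:.0f}bn")
print(f"Servers (~60%): ${servers / 1e9:.0f}bn")
print(f"Infrastructure (~40%): ${infrastructure / 1e9:.0f}bn")
```

On those assumptions, roughly $108 billion would go to servers and $72 billion to data‑centre and network build‑out.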

The financial scorecard was strong enough to justify the ambition: 2025 revenue surpassed $403 billion and Q4 consolidated revenues rose 18% year‑on‑year. Google Cloud posted a blistering 48% revenue increase in Q4, hitting $17.7 billion for the quarter and an annualised run‑rate north of $70 billion. Search grew 17% to $63.1 billion for the quarter, YouTube annual revenue topped $60 billion, and consumer subscriptions exceeded 325 million paying users.

Technical metrics were trotted out as proof that the investment is working. Gemini 3 — Alphabet’s flagship multimodal model — became the fastest‑adopted model in the company’s history. The Gemini app now claims 750 million monthly active users and Gemini Enterprise has sold over eight million paid seats. The company says token throughput for its models exceeds 10 billion tokens per minute and that unit model service costs have fallen 78% through optimisation and efficiency measures.
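The scale of those figures is easier to grasp when annualised. A small sketch, using the reported throughput (10 billion tokens per minute) and the 78% unit‑cost reduction; the $1.00 baseline unit cost is a hypothetical placeholder used only to show the effect of the reduction, not a disclosed price.

```python
# Illustrative arithmetic for the scale figures quoted on the call.
tokens_per_minute = 10e9
tokens_per_day = tokens_per_minute * 60 * 24  # minutes/hour * hours/day

cost_reduction = 0.78
baseline_unit_cost = 1.00                     # hypothetical $ per million tokens
new_unit_cost = baseline_unit_cost * (1 - cost_reduction)

print(f"Implied daily throughput: {tokens_per_day:.2e} tokens")
print(f"Unit cost after a 78% reduction: {new_unit_cost:.2f}x baseline")
```

At the quoted rate, the models would be processing on the order of 14 trillion tokens a day, each served at roughly a fifth of the earlier unit cost.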

Pichai used the call to rebut a growing industry thesis that large models will hollow out SaaS vendors’ pricing power. He argued Gemini is an enabling technology for software companies, not a substitute: many top SaaS firms are deepening integrations, using Gemini to improve product experience, automate workflows and drive growth rather than surrendering commercial leverage. Google also reiterated that its in‑house TPU accelerators are a cloud differentiator—not a stand‑alone product for external sale—and that the company is prioritising end‑to‑end efficiency rather than commoditising hardware.

The call also exposed raw operational constraints. Pichai named compute bottlenecks — electricity, local area networking and supply‑chain limits — as the company’s principal near‑term worry. The firm acknowledged a multi‑year lag between ordering hardware and seeing usable capacity, which helps explain the urgency and scale of the 2026 capex plan. Alphabet stressed that investments are not simply wasteful spending but necessary to meet surging internal and external demand for training and inference.

Beyond core cloud and models, Alphabet presented a broad commercialisation agenda. Google is embedding Gemini across Search, Workspace and YouTube, testing ad formats in the new AI search experiences (including “Direct Offers” in AI responses), and promoting a nascent open standard — the Universal Commerce Protocol — to enable agentic commerce and seamless transactions. Waymo continued to expand services, surpassing 20 million rides and entering new markets, while a strategic cloud partnership with Apple to co‑develop a base model was announced as a notable win for Google Cloud’s distribution story.

For investors the headline figure will dominate: capex guidance of $175–185 billion for 2026 set off alarms and prompted a sharp after‑hours market reaction. Yet management pushed back, pointing to robust free cash flow in 2025 ($73.3 billion for the year), high operating margins and a disciplined capital allocation framework. CFO Anat Ashkenazi emphasised rigorous investment approvals and internal efficiency programmes, including heavy use of AI to generate code and automate operations, to offset the rising depreciation and operating costs that accompany new data centres.
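The tension the market reacted to can be quantified directly from the figures above. The 2025 free cash flow is as reported; taking the midpoint of the capex range is, again, an illustrative assumption.

```python
# 2026 capex guidance versus 2025 free cash flow, both from the
# reported figures; midpoint of the range is an illustrative assumption.
fcf_2025 = 73.3e9
capex_2026_mid = (175e9 + 185e9) / 2

ratio = capex_2026_mid / fcf_2025
print(f"2026 capex midpoint is ~{ratio:.1f}x 2025 free cash flow")
```

Planned outlays of roughly 2.5 times last year's free cash flow explain why management leaned so heavily on margins and capital discipline in its defence.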

The policy and industrial implications are wider than Alphabet’s balance sheet. A near‑doubling of hyperscaler capex to build AI compute will accelerate demand for GPUs, advanced packaging, power and cooling solutions and long‑lead infrastructure components. That creates opportunities for chipmakers, data‑centre builders and energy providers, while worsening supply tensions in the near term. Regulators and competitors will be watching closely; such scale reinforces Alphabet’s position in cloud and AI, but also intensifies questions about concentration in compute, data access and market power.

In short, Google’s 2025 results are a validation of its ‘all‑in’ AI strategy — rapid model adoption, strong cloud momentum and an aggressive build‑out of physical infrastructure. The bet is that owning and optimising end‑to‑end compute will both lower costs and erect a durable barrier to entry. The challenge ahead is execution: deliver capacity at scale without degrading returns, navigate supply and energy constraints, and translate technical superiority into sustained, diversified revenue streams.
