Alphabet ended 2025 with a clear message to markets and rivals: scale compute or fall behind. In its quarterly earnings call the company said it will target $175–185 billion of capital expenditure in 2026, a near‑doubling of prior annual outlays, and framed that spending as a defensive and offensive necessity to support AI models, data centres and networking. Sundar Pichai and the finance team laid out where the money will go — roughly 60% to servers and 40% to data‑centre and network build‑out — and emphasised that more than half of their machine‑learning compute will be allocated to Google Cloud customers.
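The implied dollar split of that guidance is simple to work out; a back-of-envelope sketch using only the figures cited on the call (the 60/40 split is the rough allocation management described, not a precise disclosure):

```python
# Implied dollar split of the 2026 capex guidance (figures from the call).
capex_low, capex_high = 175, 185          # guided range, $bn
server_share, infra_share = 0.60, 0.40    # rough split given on the call

servers = (capex_low * server_share, capex_high * server_share)
infra = (capex_low * infra_share, capex_high * infra_share)
print(f"Servers: ${servers[0]:.0f}-{servers[1]:.0f}bn")              # $105-111bn
print(f"Data centres/network: ${infra[0]:.0f}-{infra[1]:.0f}bn")     # $70-74bn
```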
The financial scorecard was strong enough to justify the ambition: 2025 revenue surpassed $403 billion and Q4 consolidated revenues rose 18% year‑on‑year. Google Cloud posted a blistering 48% revenue increase in Q4, reaching $17.7 billion for the quarter and an annualised run‑rate north of $70 billion. Search grew 17% to $63.1 billion for the quarter, YouTube annual revenue topped $60 billion, and consumer subscriptions exceeded 325 million paying users.
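The run-rate arithmetic checks out on the reported figures; a quick sketch (a simple 4x annualisation of the quarter, which is how such run-rates are conventionally quoted):

```python
# Back-of-envelope check of the cloud figures cited on the call.
q4_cloud_rev_bn = 17.7                       # Q4 Google Cloud revenue, $bn (as reported)
annualised_run_rate = q4_cloud_rev_bn * 4    # simple 4x quarterly annualisation
print(f"Annualised run-rate: ${annualised_run_rate:.1f}bn")   # $70.8bn, "north of $70bn"

# Implied year-ago quarter, given the reported 48% year-on-year growth.
prior_year_q4 = q4_cloud_rev_bn / 1.48
print(f"Implied year-ago Q4: ${prior_year_q4:.1f}bn")         # roughly $12bn
```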
Technical metrics were trotted out as proof that the investment is working. Gemini 3 — Alphabet’s flagship multimodal model — became the fastest‑adopted model in the company’s history. The Gemini app now claims 750 million monthly active users and Gemini Enterprise has sold over eight million paid seats. The company says token throughput for its models exceeds 10 billion tokens per minute and that unit model service costs have fallen 78% through optimisation and efficiency measures.
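The throughput and cost claims compound in an obvious way; a minimal sketch, where the baseline cost per million tokens is a purely hypothetical number chosen for illustration (Alphabet disclosed only the 78% reduction, not an absolute cost):

```python
# Illustrative unit-economics sketch using figures cited on the call.
tokens_per_minute = 10e9            # >10 billion tokens/min (as reported)
cost_reduction = 0.78               # 78% fall in unit serving cost (as reported)

# Hypothetical baseline cost per million tokens -- NOT a disclosed figure.
baseline_cost_per_m_tokens = 1.00   # assumed $1.00/M tokens for illustration only
new_cost = baseline_cost_per_m_tokens * (1 - cost_reduction)
print(f"New cost per million tokens: ${new_cost:.2f}")        # $0.22 on the assumed baseline

tokens_per_day = tokens_per_minute * 60 * 24
print(f"Tokens per day at reported throughput: {tokens_per_day:.2e}")  # 1.44e+13
```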
Pichai used the call to rebut a growing industry thesis that large models will hollow out SaaS vendors’ pricing power. He argued Gemini is an enabling technology for software companies, not a substitute: many top SaaS firms are deepening integrations, using Gemini to improve product experience, automate workflows and drive growth rather than surrendering commercial leverage. Google also reiterated that its in‑house TPU accelerators are a cloud differentiator—not a stand‑alone product for external sale—and that the company is prioritising end‑to‑end efficiency rather than commoditising hardware.
The call also exposed raw operational constraints. Pichai named compute bottlenecks — electricity, networking and supply‑chain limits — as the company’s principal near‑term worry. The firm acknowledged a multi‑year lag between ordering hardware and seeing usable capacity, which helps explain the urgency and scale of the 2026 capex plan. Alphabet stressed that the outlay is not profligate spending but a necessity to meet surging internal and external demand for training and inference.
Beyond core cloud and models, Alphabet presented a broad commercialisation agenda. Google is embedding Gemini across Search, Workspace and YouTube, testing ad formats in the new AI search experiences (including “Direct Offers” in AI responses), and promoting a nascent open standard — the Universal Commerce Protocol — to enable agentic commerce and seamless transactions. Waymo continued to expand, surpassing 20 million rides and entering new markets, and Google announced a strategic cloud partnership with Apple to co‑develop a base model, a notable win for Google Cloud’s distribution story.
For investors the headline figure will dominate: capex guidance of $175–185 billion for 2026 unsettled the market, with shares moving sharply after hours. Management pushed back, pointing to robust 2025 free cash flow ($73.3 billion for the year), high operating margins and a disciplined capital‑allocation framework. CFO Anat Ashkenazi emphasised rigorous investment approvals and internal efficiency programmes — including heavy use of AI to generate code and automate operations — to offset the rising depreciation and operating costs that accompany new data centres.
The policy and industrial implications are wider than Alphabet’s balance sheet. A near‑doubling of hyperscaler capex to build AI compute will accelerate demand for GPUs, advanced packaging, power and cooling solutions and long‑lead infrastructure components. That creates opportunities for chipmakers, data‑centre builders and energy providers, while worsening supply tensions in the near term. Regulators and competitors will be watching closely; such scale reinforces Alphabet’s position in cloud and AI, but also intensifies questions about concentration in compute, data access and market power.
In short, Google’s 2025 results are a validation of its ‘all‑in’ AI strategy — rapid model adoption, strong cloud momentum and an aggressive build‑out of physical infrastructure. The bet is that owning and optimising end‑to‑end compute will both lower costs and erect a durable barrier to entry. The challenge ahead is execution: deliver capacity at scale without degrading returns, navigate supply and energy constraints, and translate technical superiority into sustained, diversified revenue streams.
