Apple to Recast Siri as a System‑Level Chatbot — Powered by Google’s Gemini Under the Hood

Apple will relaunch Siri as a system‑level, conversational AI called "Campos," to be unveiled at WWDC in June and shipped with iOS 27 in September. The assistant will be multimodal and deeply integrated with system apps, but its underlying model will be a customised Google Gemini running on Google Cloud and TPUs, a pragmatic deal that raises privacy, competition and strategic‑dependency questions.

Key Takeaways

  1. Apple will convert Siri into a system‑level conversational AI (project "Campos"), debuting at WWDC in June and shipping with iOS 27 in September.
  2. The new assistant will support voice and text, multi‑turn dialogue, screen analysis, device control and deep integration with Mail, Music, Photos and other apps.
  3. Core model infrastructure will be a customised Google Gemini running on Google Cloud and TPU hardware under a multi‑year agreement.
  4. Apple is designing the architecture to allow future swapping of the base model and intends to balance cloud inference with local processing and permission controls.
  5. The partnership accelerates Apple's AI roadmap but creates commercial, privacy and regulatory trade‑offs by relying on a direct competitor for foundational AI services.

Editor's Desk

Strategic Analysis

Apple’s choice to adopt Google’s Gemini as the base for a system‑level Siri is a watershed moment: it signals a pragmatic pivot from rigid self‑reliance toward partnerships that buy speed. For the company, time matters — rivals have surged with convincing generative AI experiences and investor sentiment has been sensitive to momentum. But dependence on Google’s cloud and TPUs hands bargaining power to a competitor and complicates Apple’s long‑standing privacy narrative. The company’s mitigation — a swappable model architecture and enhanced local processing — is sensible but not a silver bullet. Regulators and enterprise customers will scrutinise data paths and contractual terms, while competitors will exploit any perceived erosion of Apple’s control over its stack. Longer term, Apple will need to decide whether to double down on external partners, pursue a hybrid strategy combining best‑of‑breed vendors, or invest heavily to rebuild an internal foundation model capability; each path carries strategic risks and costs.

China Daily Brief Editorial

Apple is preparing the most radical reinvention of Siri since the assistant’s 2011 debut: a system‑level, conversational AI that will act more like ChatGPT or Google’s Gemini than the short‑command helper users have known for 15 years. The redesign, under the internal codename "Campos," is slated for a public reveal at WWDC in June and a broad rollout with iOS 27 and iPadOS 27 this September.

The new Siri will operate as a platform component rather than a standalone app. It will accept both voice and text, hold continuous multi‑turn conversations, analyse on‑screen content and open windows, control device functions, and access personal data such as calendars, messages and files to perform compound tasks. Apple plans tighter integration with Mail, Music, Photos and other core apps and will add content generation, image synthesis, summarisation, file analysis and web search capabilities.

Crucially, the assistant’s base intelligence will not be an Apple‑built large language model. Instead, Apple intends to use a customised version of Google’s Gemini family running on Google Cloud and Google’s TPU infrastructure. The move follows a multi‑year agreement between the two companies signed in January, and represents a decisive, pragmatic turn away from an exclusively in‑house AI stack.

That dependency is mitigated by architectural design choices. Apple is reportedly building the assistant so its underlying model can be swapped, preserving the option to use other providers or its own future models. The company also plans to maintain a focus on privacy through local processing where possible and tighter permission controls for system‑level data access.
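None of this architecture is public, but the pattern the reporting describes is familiar: a thin abstraction between the conversation layer and whichever model serves it. The sketch below is purely illustrative, with hypothetical names (ModelBackend, CloudBackend, Assistant) invented for the example, not drawn from Apple.

```swift
// Hypothetical abstraction over any large-language-model provider.
// None of these names come from Apple; they illustrate the pattern only.
protocol ModelBackend {
    var name: String { get }
    func respond(to prompt: String) async throws -> String
}

// A cloud-hosted backend, standing in for a customised Gemini.
struct CloudBackend: ModelBackend {
    let name = "cloud-gemini"
    func respond(to prompt: String) async throws -> String {
        // A real implementation would call the provider's inference API here.
        "cloud response"
    }
}

// A future on-device or third-party model drops in behind the same protocol.
struct LocalBackend: ModelBackend {
    let name = "on-device"
    func respond(to prompt: String) async throws -> String {
        "local response"
    }
}

// The assistant depends only on the protocol, so the base model can be
// replaced without touching the conversation layer above it.
struct Assistant {
    var backend: ModelBackend
    func ask(_ prompt: String) async throws -> String {
        try await backend.respond(to: prompt)
    }
}
```

Under this kind of design, swapping providers amounts to constructing the assistant with a different backend, which is the flexibility Apple is reportedly trying to preserve.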

The collaboration answers a pressing commercial and technical reality. Apple has lagged behind rivals in generative AI capabilities and faces pressure to deliver a visibly competitive product while its market valuation has recently been overshadowed by Alphabet's. Tapping Google's models lets Apple accelerate the launch of a more capable assistant without the years and billions it would take to train comparable foundation models in‑house.

The trade‑offs are palpable. Relying on Google for core AI services risks ceding leverage to a competitor that already dominates search and cloud infrastructure. It also raises fresh privacy and regulatory questions: how user data will be routed, what is processed locally versus in Google's cloud, and how Apple will remain compliant with its own privacy promises when base‑model inference occurs on third‑party infrastructure.
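Apple has not described how such a cloud‑local split would work. Purely as an illustration of the decision involved, a routing policy could gate cloud inference on both data sensitivity and user consent; the names here (RoutingPolicy, InferenceTarget) are invented for the sketch.

```swift
// Illustrative only: Apple has not disclosed any such mechanism.
enum InferenceTarget { case onDevice, cloud }

struct RoutingPolicy {
    // Categories this hypothetical policy never sends off the device.
    let sensitiveKinds: Set<String> = ["messages", "health", "files"]

    func target(for requestKinds: Set<String>, cloudConsent: Bool) -> InferenceTarget {
        // Use the cloud only when the request touches no sensitive data
        // and the user has opted in; otherwise keep inference local.
        if cloudConsent && requestKinds.isDisjoint(with: sensitiveKinds) {
            return .cloud
        }
        return .onDevice
    }
}

// Example: a calendar summary may go to the cloud; a message draft stays local.
let policy = RoutingPolicy()
let summaryTarget = policy.target(for: ["calendar"], cloudConsent: true)  // .cloud
let draftTarget = policy.target(for: ["messages"], cloudConsent: true)    // .onDevice
```

The interesting questions for regulators and enterprise customers are exactly the parameters of such a policy: which data categories count as sensitive, and whether the default is local or cloud.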

From a product perspective, the new Siri could materially change how people interact with iPhones and iPads, shifting expectations from single‑shot voice commands to sustained conversational workflows that orchestrate apps and personal data. For developers and competitors, the system‑level integration will create new opportunities and challenges around APIs, extensions and ecosystem control.

What to watch next: the WWDC demo for specifics on limits and safeguards, the details of the cloud‑local split for sensitive data, pricing and performance trade‑offs for end users, and whether regulators raise concerns about cross‑company data flows. The Campos project is a gamble on speed and partnership that, if executed well, could reposition Apple in the AI era — but it also binds parts of Apple’s AI future to a chief rival.
