Apple’s long‑promised overhaul of Siri has hit another snag in internal testing, forcing the company to split up and delay several AI features originally slated for a March iOS release. Engineers reported stability and accuracy shortfalls, slow response times, and a tendency for the assistant to cut off users who spoke quickly. The company now plans to spread the planned feature bundle across multiple iOS updates, with some elements slipping to May and others possibly not arriving until September.
The trouble marks the second major postponement since Apple unveiled the Siri upgrade at WWDC 2024. That announcement committed Siri to run atop Apple Foundation Models, the company’s in‑house large language model platform, and initially targeted an early‑2025 debut. Apple’s internal timeline was pushed to 2026 after its self‑developed models underperformed, and executives had been pressing for a March launch with iOS 26.4 to avoid further slips.
Testers identified several concrete issues: incorrect query handling, inconsistent accuracy, excessive latency and, in some cases, the assistant invoking its OpenAI ChatGPT integration instead of Apple’s own stack. That last failure is especially awkward because it exposes both a technical routing problem and the broader dilemma of mixing third‑party AI with Apple’s privacy‑first positioning.
Two flagship capabilities appear most vulnerable to delay. The first expands Siri’s ability to access personal data—allowing spoken queries to surface old messages, play podcasts shared by contacts and otherwise search user content. The second enables advanced cross‑app voice commands, such as locating and editing a photo and sending it to a contact in one instruction. Both features are partially functional in beta but fail to meet Apple’s stability standards.
Privacy requirements remain a major drag on development. Apple engineers and managers, led in part by software chief Craig Federighi, have repeatedly stressed that personalized AI must not expose user data. Those constraints demand extra engineering and testing work, slowing feature parity with rivals that lean more heavily on cloud processing and third‑party models.
Not all work has stalled. Two less publicized tools—a web search utility similar to Perplexity and a customizable image generator derived from Image Playground—have surfaced in the iOS 26.4 and 26.5 betas and look likelier to ship on schedule. At the same time, Apple is already planning a broader AI initiative codenamed "Campo" tied to iOS 27: a conversational, system‑level assistant woven into Mail, Calendar and other core apps that draws on third‑party models such as Google’s Gemini.
The delays also intersect with Apple’s hardware strategy. CEO Tim Cook has hinted at new data‑center chips to bolster AI processing both on devices and in the cloud, a reminder that software deadlines depend on underlying compute capacity. For Apple, the twin pressures of delivering market‑leading AI features and preserving its privacy brand are colliding with the hard realities of training and integrating competitive large language models.
For consumers and competitors, the immediate consequence is practical: fewer headline AI features in the next iOS update and a more incremental, staged rollout of functionality. For Apple, the episode exposes a wider strategic trade‑off between self‑reliance in model development and the temptation to bolt on external models for speed. How Apple resolves that trade‑off will determine whether Siri’s long‑promised renaissance strengthens iPhone differentiation or becomes a stalled initiative that hands momentum to more aggressive AI players.
