Apple just wrote a $1 billion-per-year check to Google. Not for ad revenue, not for search placement — for AI. Specifically, to power a completely rearchitected Siri using Google’s Gemini models running on Apple’s Private Cloud Compute infrastructure.
This is internally called Project Campos, and it represents the most significant shift in Apple’s AI strategy since Siri launched in 2011. For developers building on Apple’s platform, this isn’t background noise. It’s a preview of the APIs you’ll be calling in 12 months.
What Actually Changed (Beyond the Headline)
Apple’s original Apple Intelligence launch in late 2024 was built on Apple Foundation Models — relatively small, on-device models in the 3–7 billion parameter range. Capable for simple tasks, but clearly underpowered compared to frontier models. The gap was visible in real use: complex prompts returned cautious, hedged responses; multi-step tasks frequently failed.
Project Campos doesn’t throw that away — it layers on top. The architecture now looks like this:
- On-device models (~3B parameters): Handle simple, latency-sensitive tasks. Autocorrect, quick summaries, local queries. Still private, still fast.
- Private Cloud Compute: Apple’s custom server infrastructure. User data is encrypted end-to-end and Apple claims it cannot access the contents.
- Gemini 3.1 custom model (~1.2T parameters): Handles complex reasoning, multi-step planning, cross-app orchestration. This is the engine under the hood of the new agentic Siri.
The key promise Apple is making: Gemini runs in an isolated environment on Apple’s infrastructure. No Google logging. No data sharing. The “white-label” arrangement means users see Siri, not Gemini — and theoretically, the privacy guarantees Apple has built remain intact.
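To make the tiering concrete, here’s a sketch of how request routing across those three layers might work. This is entirely hypothetical — none of these types or names exist in any Apple SDK; the thresholds are invented for illustration:

```swift
// Hypothetical sketch of the three-tier routing described above.
// Every type and name here is invented for illustration only.
struct AssistantRequest {
    enum Complexity { case simple, complex }
    let complexity: Complexity
    let isLatencySensitive: Bool
    let requiresMultiStepPlanning: Bool
    let touchesMultipleApps: Bool
}

enum InferenceTier {
    case onDevice        // ~3B model: fast, private, limited reasoning
    case privateCloud    // Apple's own models on PCC
    case geminiOnPCC     // ~1.2T Gemini model, isolated on Apple servers
}

func route(_ request: AssistantRequest) -> InferenceTier {
    if request.isLatencySensitive && request.complexity == .simple {
        return .onDevice          // autocorrect, quick summaries, local queries
    }
    if request.requiresMultiStepPlanning || request.touchesMultipleApps {
        return .geminiOnPCC       // complex reasoning, cross-app orchestration
    }
    return .privateCloud
}
```

The interesting design question is who decides the tier — the OS, the app, or the user. Nothing public answers that yet.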
I’m cautiously optimistic about this but want to see independent security audits before trusting it with sensitive workflows.
WWDC 2026: What to Expect on June 8
iOS 26.5 beta (released March 30) contains no Gemini-powered features. The signal from Apple’s internal teams is clear: everything is being held for WWDC 2026, and the full rollout targets iOS 27.
As a developer, here’s what I’m watching for:
1. The SiriKit Evolution
Current SiriKit Intents are limited and brittle. With a model capable of reasoning, Apple should be able to expose a much more flexible API: describe what your app can do in natural language, and let the model handle intent extraction. This would be a massive upgrade.
2. On-Screen Context APIs
One of Project Campos’ headline features is “on-screen awareness”: Siri understanding what the user is looking at and acting on it. For developers, this likely means new APIs to expose structured context from your app to the system AI layer. Think accessibility metadata, but richer and bidirectional.
3. App Actions Registry
Rumors suggest Apple is building an App Actions Registry: a system-level catalog of things each app can do, queryable by Siri. If this ships, it’s the most developer-relevant change since SwiftUI. You’ll want your app listed and well-described from day one.
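There’s no public API for the on-screen awareness in point 2 yet, but the closest shipping analog is donating an NSUserActivity, which already feeds Siri Suggestions and Handoff with structured context about what’s on screen. A minimal example — the activity type and userInfo keys are app-specific placeholders I made up:

```swift
import Foundation

// Donate structured "what the user is looking at" context to the system.
// The activity type and userInfo keys below are placeholder values.
let activity = NSUserActivity(activityType: "com.example.invoices.view")
activity.title = "Viewing invoice"
activity.userInfo = ["invoiceID": "INV-1042"]
activity.isEligibleForHandoff = true
activity.isEligibleForPrediction = true
activity.becomeCurrent()   // marks this as the current on-screen activity
```

If richer on-screen context APIs ship at WWDC, expect them to generalize this donation pattern rather than replace it.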
The Privacy Tradeoff I’m Still Thinking About
Apple’s privacy marketing is excellent, and Private Cloud Compute is a genuinely thoughtful technical design. But let me be honest about the tradeoff:
Before Project Campos, sensitive queries stayed on-device. Now, complex tasks route to Gemini on Apple’s servers. Apple says the PCC nodes use hardware attestation and that outside security researchers can verify the software stack those nodes run.
The practical risk isn’t Apple or Google being malicious. It’s the attack surface. Two companies instead of one, a more complex network path, and a model with 1.2 trillion parameters whose internal behavior is opaque. For enterprise apps handling sensitive data, you’ll need to think carefully about which user interactions might trigger cloud routing.
My recommendation: when WWDC ships documentation, look specifically for APIs that let your app declare which operations must remain on-device. If Apple doesn’t provide this control, lobby hard for it.
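If Apple does expose that control, I’d expect it to look something like a per-operation privacy requirement. This is purely speculative — no such attribute or enum exists in any shipping SDK:

```swift
// Hypothetical: declare that an operation must never leave the device.
// Neither this attribute nor the enum exists in any shipping Apple SDK.
// @AIProcessing(.onDeviceOnly)
// func summarizeMedicalRecord(_ record: HealthRecord) async throws -> String {
//     // summarization constrained to the on-device model
// }
```

Even a coarse app-level entitlement would be better than nothing; per-operation granularity is the version worth lobbying for.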
What This Means for AI-First App Development
The most interesting question isn’t Siri — it’s what this means for developers building AI features into their own apps.
Apple has consistently pushed developers toward Apple Foundation Models for on-device inference. With Gemini powering the system layer, there’s a logical future where system-level AI and app-level AI share context. Imagine your app’s AI assistant knowing what the user was just doing in Calendar, Mail, or Maps — not through explicit API calls, but through the system intelligence layer.
This is what Google has been building toward with Gemini’s role across Android. Apple is now, belatedly but seriously, building the same thing.
The practical playbook for developers right now:
```swift
import FoundationModels

// Today: explicitly call Apple Foundation Models on-device.
let session = LanguageModelSession()
let response = try await session.respond(to: "Summarize this text")
print(response.content)
```

```swift
// WWDC 2026 (expected): register app capabilities for system-level Siri.
// Hypothetical sketch; AppCapabilityRegistry is not a real API.
// AppCapabilityRegistry.register(
//     capability: "create_invoice",
//     description: "Creates a new invoice for a customer",
//     parameters: ["customer_name", "amount", "due_date"]
// )
```
The second pattern is speculative, but it’s the direction Apple needs to go for this to be more than a chatbot upgrade.
My Take: Delayed But Directionally Right
Apple got beat badly on AI in 2024–2025. The company that invented the modern smartphone assistant was lapped by OpenAI, Google, and Anthropic. Project Campos is the acknowledgment of that reality and a pragmatic response: use the best model available while building toward long-term independence.
For developers, the window to prepare is now. Start understanding how your app exposes its capabilities to external systems. Read up on App Intents (available today) — the future App Actions Registry will almost certainly build on that foundation.
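App Intents is worth learning now. Here’s a minimal, real example of declaring a capability in natural language that the system can already surface through Siri and Shortcuts — the invoice domain mirrors the registry sketch earlier, but names like CreateInvoiceIntent are mine:

```swift
import AppIntents

// A real App Intent, available today: a capability described in
// natural language that the system can discover and invoke.
struct CreateInvoiceIntent: AppIntent {
    static var title: LocalizedStringResource = "Create Invoice"
    static var description = IntentDescription("Creates a new invoice for a customer")

    @Parameter(title: "Customer Name")
    var customerName: String

    @Parameter(title: "Amount")
    var amount: Double

    func perform() async throws -> some IntentResult {
        // App-specific invoice creation would go here.
        return .result()
    }
}
```

If the App Actions Registry ships on top of App Intents, apps that already declare clean, well-described intents will be the ones Siri can actually use on day one.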
WWDC 2026 on June 8 is the event that matters this year. Clear your calendar.
Sources: 9to5Mac — Project Campos, MacRumors — Gemini beyond Siri, 9to5Mac — iOS 26.5 beta