Apple's hidden AI partner: the company heavily relies on Anthropic's Claude internally
Anthropic wanted several billion per year, so Apple went with Google for Siri
by Skye Jacobs
Rumor mill: Apple's public embrace of Google's Gemini might be steering the next generation of AI features on iPhones and Macs, but behind the scenes, the company is running on a completely different foundation. According to Bloomberg's Mark Gurman, Apple's engineers and product teams are leaning heavily on Anthropic's technology for internal development work.
Speaking on the TBPN podcast, Gurman said Apple "runs on Anthropic at this point," describing custom versions of Claude, Anthropic's flagship large language model, running on Apple's own servers.
Anthropic's models are powering "a lot" of Apple's internal product development and tooling, he said, with the company choosing to host these customized Claude instances on infrastructure it controls rather than relying on a generic external service. That approach lets Apple keep sensitive internal data locked down on its own systems while still tapping a third-party frontier model for day-to-day engineering work.
On the consumer side, Apple has been assembling what amounts to a multi-tier AI stack for Siri and Apple Intelligence. At the top sits a reported 1.2 trillion-parameter custom Gemini model from Google – an order of magnitude larger than Apple's own roughly 150 billion-parameter server-side foundation model that powers Apple Intelligence features.
Below that, Apple runs a much smaller on-device model built to work efficiently on Apple silicon, handling latency-sensitive and privacy-critical tasks locally. The general idea: simpler requests get processed on the device, while more complex queries get routed to larger models running in Apple-operated data centers, balancing capability against cost and performance.
Gurman's comments about "custom versions of Claude" suggest a parallel track focused on Apple's internal development environment rather than on consumer Siri responses. By deploying Claude on its own servers, Apple can integrate Anthropic's models into internal tools without exposing unreleased products or code to an outside provider's infrastructure.
All of this fits into Apple's broader Private Cloud Compute strategy for cloud-based AI. Apple has laid out a system where sensitive user requests get routed to hardened, verified servers running specific model builds, with strict access controls and data deletion guarantees after processing.
Within that framework, Apple can mix its own models with external systems like Google's Gemini – and potentially others – under a unified privacy and security layer, even if it hasn't publicly detailed exactly how each provider plugs in.
The business context helps explain how this architecture came together. Apple has confirmed that Gemini will power new Siri features, and multiple reports indicate the company will pay around $1 billion per year for access to a custom Gemini model. Separate reporting suggests Apple previously explored a deal with Anthropic to rebuild Siri around Claude, but those talks fell apart after Anthropic asked for "several billion dollars" annually, with fees expected to climb over time.
Put it all together, and you get an Apple that's publicly partnering with Google for the next wave of Siri upgrades while quietly depending on Anthropic's Claude for much of the internal tooling its engineers rely on. It's a hybrid AI strategy spanning multiple vendors and model tiers, but less of Apple's own... intelligence.