How Apple’s AI strategy is classic APPL

Sahil Sinha · apple, apple intelligence, strategy

Earlier this year at WWDC, Apple finally discussed how they’re thinking about AI. Until then, Apple had seemed relatively quiet while the AI boom spread through its peers: Microsoft had landed an early, close partnership with OpenAI, Meta had released several small and large language models to the open source community, and Google had released Bard/Gemini.

I was curious what Apple had to say here. Apple is typically happy to sit back and let hype cycles come and go before carefully deciding if, and how, to adopt a technology and apply the classic “Apple” effect.

How Apple is “APPL”ing AI

1. Wait, watch, perfect

Once again, Apple has positioned themselves to learn from the early stages of an emerging technology. Apple announced that ChatGPT would be one of the private, third-party models that Apple Intelligence leverages for iPhone users. This lets Apple sit back and learn about the new interaction patterns, while OpenAI (and other private model developers) worry about making GPTs.

By not jumping directly into the “developing foundational models” game, Apple gets to focus on the product side - understanding how best to productize LLMs for the way humans actually use them. Arguably, this has always been one of Apple’s core strengths.

2. Leaning on privacy via vertical integration

Data privacy has been an inconvenient truth of this latest LLM-driven wave of AI: consumers are worried about who is touching their data. Apple has taken advantage of their previous investments in compute to build a uniquely strong offering in the market when it comes to safeguarding consumer data.

For some context on Apple Intelligence:

  1. Apple has several models developed in-house for certain AI-powered features. Apple’s own models will still power most AI features in Apple products, but Apple is happy to bring in third-party models when needed.
  2. Some of these models run locally on the iPhone, and some are offloaded to “Private Cloud Compute” - servers owned by Apple, running on Apple Silicon, designed to run AI models in the cloud (a rough sketch of this split follows below).
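
To make the split concrete, here is a minimal sketch in Swift of what this kind of tiered routing could look like. None of these types, thresholds, or names are real Apple APIs; they are invented purely to illustrate the on-device / Private Cloud Compute / third-party idea described above.

```swift
import Foundation

// Hypothetical model tiers, mirroring the description above.
enum ModelTier {
    case onDevice       // small model running on the iPhone itself
    case privateCloud   // Apple-owned servers running Apple Silicon
    case thirdParty     // e.g. ChatGPT, with explicit user consent
}

// A made-up request shape for the sketch.
struct AIRequest {
    let prompt: String
    let needsWorldKnowledge: Bool   // broad, open-ended questions
    let estimatedTokens: Int        // rough size of the task
}

func route(_ request: AIRequest) -> ModelTier {
    // Open-ended "world knowledge" questions fall through to a
    // third-party model - the only tier where data leaves
    // Apple-controlled hardware.
    if request.needsWorldKnowledge {
        return .thirdParty
    }
    // Larger tasks are offloaded to Private Cloud Compute,
    // which still keeps data on Apple-owned servers.
    if request.estimatedTokens > 2_000 {
        return .privateCloud
    }
    // Everything else stays on the device.
    return .onDevice
}

let summarize = AIRequest(prompt: "Summarize this email thread",
                          needsWorldKnowledge: false,
                          estimatedTokens: 500)
print(route(summarize))   // onDevice
```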

For features that use models developed and run in-house - whether on-device or on Private Cloud Compute - Apple can guarantee that no data ever leaves Apple: everything from training the model to running inference is done in-house under strong privacy policies. This is a unique offering that, today, only Apple can provide consumers; very few companies have the necessary data, software, and compute chops to do so. Using in-house silicon also limits Apple’s reliance on other players in the ecosystem, such as NVIDIA, and on any market forces that may affect the demand for, or price of, compute.

Apple’s Future in AI: Owning Distribution

From CNBC, ‘Apple’s vision for AI isn’t about one big model — it’s a slew of smaller models that don’t require the same amount of computing power and memory, running on Apple’s devices and chips themselves.’

To me, it looks like Apple is angling for an “App Store”-like position in the AI economy. They want to focus on being where great models meet users. Just as Apple develops a few native mobile apps, they may develop their own small foundational models to power native Apple AI features. But broadly, I think Apple wants to own how foundational models get used by people - much as companies list apps on the App Store to leverage Apple’s infrastructure for building and distributing apps, giving Apple a ton of data and a revenue share in exchange.
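
Going back to the CNBC quote, here is a hypothetical Swift sketch of the “slew of smaller models” idea: each feature maps to its own small, task-specific model instead of everything routing through one giant model. The feature set, model names, and parameter counts are all invented for illustration.

```swift
import Foundation

// Made-up features and models to illustrate "many small models"
// rather than one large one.
struct SmallModel {
    let name: String
    let parameterCount: Int   // far below frontier-model scale
    let runsOnDevice: Bool
}

enum Feature {
    case summarizeNotification
    case rewriteEmail
    case describePhoto
}

// A per-feature lookup instead of one model handling everything.
let modelForFeature: [Feature: SmallModel] = [
    .summarizeNotification: SmallModel(name: "summarizer-3b",
                                       parameterCount: 3_000_000_000,
                                       runsOnDevice: true),
    .rewriteEmail: SmallModel(name: "rewriter-3b",
                              parameterCount: 3_000_000_000,
                              runsOnDevice: true),
    .describePhoto: SmallModel(name: "captioner-7b",
                               parameterCount: 7_000_000_000,
                               runsOnDevice: false)  // offloaded to Private Cloud Compute
]

if let model = modelForFeature[.summarizeNotification] {
    print("\(model.name) runs on device: \(model.runsOnDevice)")
}
```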

Links:

  1. John Hwang’s “Apple’s AI Strategy in a Nutshell”
  2. CNBC’s “Apple execs explain why its AI is different from competitors”