*Image: A humanoid robot in a business suit contemplates a red apple missing a bite, standing in an empty city office.*
Apple's thoughtful approach to generative AI offers a powerful, user-centric vision of the technology's future.

At Apple’s WWDC 2024 event this week, the company unveiled the latest integration of AI into their products. While this “Apple Intelligence” (oof) isn’t going to transform the business world, it paints a unique and intriguing vision of the potential future of generative AI in everyday life.

Let’s take a look!

AI as a First-Class Citizen

A major question in the world of generative AI has been “how will AIs interact with the computer systems of tomorrow?” In a human-driven world, the common way is through graphical or text-based interfaces – typing commands, clicking buttons, and so on. But AI is already in the computer – there should be computer-native ways for it to interact with programs that don’t require screen real estate. APIs have been the historical approach to this, but they vary significantly from program to program, and AIs don’t know how to use them out of the box.

Apple just offered a first look at what the future may hold – through Siri.

Apple has offered the Siri assistant on their devices since 2011, so it’s no surprise that generative AI is powering a new, improved version of Siri on Apple devices.

Old School Siri used to be fairly limited – answering simple questions, setting timers or calendar invites on first-party apps, and that was about it.

GenAI Siri, though, has a larger view of things. Powered by an LLM running locally on your device, Siri can do all of the same old things and then some. New Siri can plan and execute work across your apps using the App Intents framework, which provides a native set of interfaces for AI-driven actions that sit alongside user-driven ones. If you ask new Siri to plan a trip for you, it will search the web, book events on your calendar, and send your friends an email – all without requiring you to open or switch between apps.

It’s a realization of Siri as an assistant in the full meaning of the word – a virtual helper to aid you with your life, and much, much more than just a voice interface.

Local AI as Generalist and Specialist

Open source generative AI was aggressively ported to a range of computers last year, and Apple devices based on Apple Silicon chips were early and clear performance winners. For years, Apple has been developing their chips specifically to handle AI-type applications. When generative AI exploded onto the scene in late 2022, they were ready, and their system performance stood head and shoulders above all other consumer-level hardware available.

So, it comes as no surprise that a centerpiece of their AI strategy is running an LLM directly on device. Small, open-source LLMs have been competitive with the best closed models for a while now, and Apple’s on-device LLM is no exception. Like OpenAI’s ChatGPT or Google’s Gemini, it can answer questions, draft writing, and much more.

What’s special about Apple’s approach is that they’re allowing apps to augment that LLM’s intelligence. Using a technique called LoRA – which trains a small set of specialized adapter weights alongside the bigger, frozen, general-use base LLM – apps can now enable specific functionality unique to that app that wouldn’t have been possible with a single, monolithic AI model. It’s a smart approach to adding functionality to resource-constrained devices without having to fill memory with a bunch of huge, differentiated AI models.
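The core idea behind LoRA can be sketched in a few lines of linear algebra – a toy illustration only, not Apple’s actual implementation. A frozen base weight matrix W is augmented with a low-rank update B·A, and only the small matrices A and B are trained per task, so each app-specific adapter costs a tiny fraction of a full model copy:

```python
import numpy as np

# Toy LoRA sketch (illustrative only; shapes and rank are arbitrary choices).
d, k, r = 64, 64, 4  # base weight is d x k; adapter rank r is much smaller
rng = np.random.default_rng(0)

W = rng.normal(size=(d, k))          # frozen base weight (shared, never retrained)
A = rng.normal(size=(r, k)) * 0.01   # small trainable matrix
B = np.zeros((d, r))                 # B starts at zero, so the adapter is a no-op at first

x = rng.normal(size=(k,))
base_out = W @ x                     # output of the frozen base model
adapted_out = W @ x + B @ (A @ x)    # base output plus the low-rank correction

# Storage: an adapter vs. a full fine-tuned copy of W
full_params = d * k                  # 4096 parameters for a whole new W
lora_params = d * r + r * k          # 512 parameters per adapter
print(lora_params, full_params)
```

Because B is initialized to zero, the adapter starts out changing nothing; training then nudges A and B to add app-specific behavior on top of the shared base weights, which is what lets one on-device model serve many apps cheaply.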

Private AI, Both Local and Remote

As powerful as on-device AI can be, though, Apple acknowledges that it isn’t right for every workflow – more complicated or computation-intensive activities need bigger AI. So, along with its local LLM, Apple Intelligence allows for remote LLMs (at this time, ChatGPT) to offer additional functionality.

Apple has long been a company that emphasizes user security and privacy, in stark contrast to other Big Tech firms. Sending user data to a third party presents many opportunities for security risks, and Apple spent an important portion of WWDC ’24 walking through the steps they are taking to maintain security and privacy while still offering advanced AI capabilities. These included anonymizing some user information, ensuring no data is stored after the interaction, multiple layers of security on the remote servers to protect user data, and transparency support for security researchers. This is a great step forward in user privacy expectations for AI; Google and Microsoft have been transparent that user data will be utilized for training, and OpenAI offers tiered privacy guarantees through their offerings. This respect for users will help people realize the full benefits of AI they might otherwise miss due to privacy concerns today.

Apple’s Vision of the Generative AI Future

The direct benefits of Apple Intelligence will be limited for businesses in the near future. While it may help some salespeople coordinate meetings more efficiently or support math education with the iPad’s new calculator app functionality, no truly transformative technology stood out this time.

What did stand out, though, was Apple’s vision of generative AI seamlessly integrated into our future technology. In their view, private, on-device AI leads the way, connecting apps through first-class interfaces, enhanced with app-specific knowledge, and using high-powered remote AI only when necessary – and only in the most private way possible. It’s an approach that puts users first while still empowering AI to approach its potential as both an assistant and an enhancement to human capability.

Let’s hope other technology firms follow their lead.