Amidst all the announcements at the Apple Worldwide Developers Conference (WWDC) last week, we articulated our views on one specific aspect of artificial intelligence (AI) – creating specific use cases that remove the hype around this fast-developing technology. The other was using AI to create smarter solutions for users across Apple’s product line.
The Cupertino-based tech giant, which had fallen way behind its rivals in AI integration, decided to supercharge its product lineup. That it christened the effort Apple Intelligence raised a few sniggers. But the idea of delivering personalized AI services and solutions without compromising user data has caught the collective fancy of all proponents of AI, and more specifically of Generative AI (Gen AI).
Besides being a major leap of faith for Team Apple, the WWDC announcements also put the rest of the clutter around AI in perspective. Why do we say so? In recent times, investors have been clamouring to get their fingers into multiple AI pies in the hope of making money on at least some of them, and companies have been innovating big time to oblige.
More importantly, Apple’s measured approach to AI innovation and integration has brought a new perspective to this genie in a bottle, not to mention thrown a spanner in the works of some rivals whose half-baked solutions resulted in meme-fests and highlighted the need for red teams to assess such releases before launch.
What makes Apple’s entire AI premise different from the rest is its vision of using private data to help AI simplify our tasks through a device-first, cloud-later approach. As for the latter, the company claims to have found a new way to handle sensitive data in the cloud, pushing Apple’s privacy narrative further.
So, what exactly does this mean? Apple says it will aim to fulfil AI tasks locally on the device before any data is exchanged with cloud services; where an exchange is unavoidable, the data will be encrypted in transit and deleted once the task is done. Sounds incredible? That is why Apple has invited independent security researchers to verify its claims.
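Apple has not published client-side code for this routing, but the stated device-first logic can be pictured with a short, purely hypothetical Swift sketch. None of the types or numbers below are Apple APIs or real figures; they only illustrate the decision described above.

```swift
import Foundation

// Hypothetical sketch of the device-first flow; illustrative only.
struct InferenceRequest {
    let prompt: String
    let estimatedComputeUnits: Int   // rough cost of the model the task needs
}

enum ExecutionTarget { case onDevice, privateCloud }

// Assumed ceiling for what the on-device model can serve (made-up figure).
let onDeviceBudget = 1_000

func route(_ request: InferenceRequest) -> ExecutionTarget {
    // Prefer local execution; fall back to the cloud only when the
    // task exceeds what the on-device model can handle.
    request.estimatedComputeUnits <= onDeviceBudget ? .onDevice : .privateCloud
}

print(route(InferenceRequest(prompt: "Summarise my unread emails",
                             estimatedComputeUnits: 4_200)))
// -> privateCloud: the request would be encrypted, sent, and deleted after use
```

The point of the pattern is that spilling to the cloud is the per-request exception, not the default.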
The new process is called Private Cloud Compute, and Apple is taking pains to highlight how it differs from the data collection and storage practices of rivals such as Amazon, Meta and Google. Apple executives said personal data in the cloud would be used only for AI tasks and, once those are done, would be neither retained nor accessible to the company, not even for quality control.
Apple and Tim Cook appear clear in their minds that the trust users feel for the company, its processes and its products will define its future. It is somewhat like disclosing one’s most intimate details to a physician, a tax consultant or a legal advisor in the firm belief that such information will never leave their offices.
Apple wants its users to believe that the sensitive data they share – pictures, emails and messages – will be used to deliver automated services without actually being stored online, thus rendering it less vulnerable. Craig Federighi, Apple’s senior vice president of software engineering, says this will work in upcoming versions of iOS.
He shared an example of how your handheld device can recognize a postponed meeting in your calendar and prepare instead for your daughter’s performance, with details about the event fetched automatically by Siri, which can also predict local traffic and chart out a plan for the family outing.
Of course, one direct benefit comes from Apple’s stated objective of not earning from ads, which makes its hardware and services less dependent on data collection than other devices. But the company is also taking care not to repeat past iCloud data leaks by avoiding cloud computing for AI tasks wherever possible.
As Federighi says, the idea is to use personal intelligence systems on the devices for most of the data processing, which means AI models run on iPhones and Macs instead of in the cloud. The process is thus aware of personal data but has no need to collect it in the cloud for future use; once the instance is over, the data is scrubbed.
Of course, technologists raise the question of the massive computing power required even for modest AI models to work. So how does Apple plan to have phone-sized chips do what normally takes large servers? The company says its research into on-device computing makes this possible, starting with the design of its M1 chips that rolled out in late 2020.
However, doubts persist: technologists ask how Apple will handle requests that need more computing than the device can offer and must therefore be processed on Apple servers along with personal data. This is where the Private Cloud Compute system kicks in, says Apple, extending the security and privacy of Apple devices into the cloud.
The company claims that personal data will not be accessible to anyone other than the user (not even to Apple). The process limits data usage to the task at hand, which suggests an encryption protocol in which an iPhone or a Mac first decides whether it needs the help of a larger AI model and more computing.
If the answer is yes, it packages a request containing the prompt and the specific model to be used, then encrypts the request so that only that AI model holds the key to decrypt the message and act on it. Of course, it remains to be seen whether the device actually notifies the user that data is being sent to the cloud for answers.
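Apple has not disclosed the actual cryptography, but a public-key scheme along these lines would match that description. The Swift sketch below, built on Apple’s CryptoKit framework, is illustrative only; it assumes (our assumption, not anything Apple has documented) that the cloud-side model publishes a Curve25519 public key the device can encrypt to.

```swift
import Foundation
import CryptoKit

// Illustrative only: Apple has not published this protocol. We assume the
// cloud model exposes a Curve25519 public key that the device encrypts to.
struct CloudRequest: Codable {
    let prompt: String
    let modelID: String
}

func sealRequest(_ request: CloudRequest,
                 for modelKey: Curve25519.KeyAgreement.PublicKey) throws -> Data {
    // Fresh ephemeral key pair per request: a new secret every time, never stored.
    let ephemeral = Curve25519.KeyAgreement.PrivateKey()
    let shared = try ephemeral.sharedSecretFromKeyAgreement(with: modelKey)
    let key = shared.hkdfDerivedSymmetricKey(using: SHA256.self,
                                             salt: Data(),
                                             sharedInfo: Data("pcc-demo".utf8),
                                             outputByteCount: 32)
    let plaintext = try JSONEncoder().encode(request)
    // Only a holder of the model's private key can recompute `key` and decrypt.
    let sealed = try AES.GCM.seal(plaintext, using: key)
    return ephemeral.publicKey.rawRepresentation + sealed.combined!
}
```

Because the ephemeral secret is generated per request and then discarded, even a captured ciphertext cannot be opened later without the model’s private key – which is the property Apple’s description seems to be aiming at.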
For now, security experts believe that, based on Apple’s disclosures thus far, the system could be more privacy-focussed than any other AI product in the market today. Whether it works the way it is supposed to will become clear only after Apple Intelligence arrives in beta on the iPhone 15 Pro and the new macOS Sequoia later this year. Of course, Tim Cook thinks Apple Intelligence will become indispensable. Let’s wait and see!