According to Mark Gurman, Apple's approach could become a meaningful differentiator in the AI race, with user privacy as the central focus. While Google, Microsoft, and OpenAI are well known for their work on large language models, which power conversational AI and text generation, Apple has been mostly coy about its developments in this area.
At least publicly. Behind the scenes, however, the technology reportedly dubbed "AppleGPT" is already positioned as the core of many of the company's vital applications and services.
Indeed, consistent with Apple's established way of doing things, the integration should be distinctive and possibly transformative. According to Gurman's Bloomberg report, Apple has built its own large language model to power these features, and notably, all indications are that this general-purpose technology runs on the processor inside the phone rather than in the cloud.
Responsiveness and privacy emerge as the key attributes of AI models that run locally on the device: data is never sent off to a server, which avoids the latency of passing requests to remote infrastructure. By sidestepping the cloud-based approach of its competitors, Apple keeps both the data and the processing power for its machine learning models on the device itself. User data never leaves the iPhone, which also removes a whole class of exposure risks.
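To make the on-device idea concrete, here is a minimal sketch of what local inference can look like with Apple's shipping Core ML framework. To be clear, this is illustrative only: the model file, feature names, and token handling are my assumptions, not details from Gurman's report.

```swift
import CoreML

// Minimal sketch of on-device inference with Core ML.
// "LanguageModel.mlmodelc" and its feature names are hypothetical;
// the pattern (load once, predict locally, no network) is the point.
func runLocalModel(tokens: [Int32]) throws -> MLFeatureProvider {
    let config = MLModelConfiguration()
    config.computeUnits = .all  // allow CPU, GPU, and the Neural Engine

    // Hypothetical compiled model bundled with the app.
    let modelURL = Bundle.main.url(forResource: "LanguageModel",
                                   withExtension: "mlmodelc")!
    let model = try MLModel(contentsOf: modelURL, configuration: config)

    // Pack the token IDs into the shape the model expects.
    let array = try MLMultiArray(shape: [1, NSNumber(value: tokens.count)],
                                 dataType: .int32)
    for (i, token) in tokens.enumerated() {
        array[i] = NSNumber(value: token)
    }

    // The prediction runs entirely in-process: the request, the data,
    // and the result never touch a server.
    let input = try MLDictionaryFeatureProvider(
        dictionary: ["tokens": MLFeatureValue(multiArray: array)])
    return try model.prediction(from: input)
}
```

Because the weights ship inside the app and the prediction runs in-process, latency is bounded by the phone's silicon rather than by network conditions, which is exactly the trade-off described above.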
The likely downside is that the AI will be less powerful at first than the large model architectures Google and others run on powerful server hardware. Gurman, who is on top of these things, indicates that Apple "plans to demonstrate how technology can help people in their lives" rather than compete directly on benchmarks.
One already wonders which of these AI-powered features will make it into iOS 18 this September. The report suggests auto-summarization of essays and presentations, along with intelligent writing suggestions for completing documents, both driven by the language models. It also hints at AI-powered functions elsewhere, such as automatic playlist creation in Apple Music, which may remind you of the similar-sounding feature Spotify introduced.
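Apple has published no API for the rumored summarization feature, but the kind of on-device text processing it implies can be sketched with the existing NaturalLanguage framework. The frequency-based scoring below is a deliberately naive stand-in for whatever model Apple actually uses; the point is simply that all of it runs locally.

```swift
import NaturalLanguage

// Toy extractive summarizer: rank sentences by the document-wide
// frequency of the words they contain and keep the top few.
func summarize(_ text: String, sentenceCount: Int = 3) -> String {
    // Split into sentences on-device.
    let sentenceTokenizer = NLTokenizer(unit: .sentence)
    sentenceTokenizer.string = text
    let sentences = sentenceTokenizer
        .tokens(for: text.startIndex..<text.endIndex)
        .map { String(text[$0]) }

    // Count word frequencies across the whole document.
    var frequency: [String: Int] = [:]
    let wordTokenizer = NLTokenizer(unit: .word)
    wordTokenizer.string = text
    wordTokenizer.enumerateTokens(in: text.startIndex..<text.endIndex) { range, _ in
        frequency[text[range].lowercased(), default: 0] += 1
        return true
    }

    // Score each sentence by its words' frequencies; keep the best.
    func score(_ sentence: String) -> Int {
        sentence.lowercased()
            .split(whereSeparator: { !$0.isLetter })
            .reduce(0) { $0 + (frequency[String($1)] ?? 0) }
    }
    return sentences.sorted { score($0) > score($1) }
        .prefix(sentenceCount)
        .joined(separator: " ")
}
```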
Without a doubt, the most intriguing prospect is a new AI-powered knowledge system for Siri, which has gone without a significant upgrade in capability for over a decade. Integrating large language models with content-related and audio tasks could deliver an entirely new Siri experience.
Developers and AppleCare support staff could likewise benefit from AI upgrades in various ways, including better code completion and workflow validation tools, as well as improved customer support capabilities.
Apple's ambitions extend further still. Recent research papers point to work on multimodal language models, which analyze text, images, and other media formats. Among the future projects targeting speech-based editing of documents, images, and video is one option that may let users make edits simply by saying a sentence.
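Nothing concrete is known about how such voice-driven editing would work, but the existing Speech framework already offers the on-device transcription a feature like this could sit on top of. A rough sketch, with an invented command vocabulary (a real app would also need to request authorization via SFSpeechRecognizer.requestAuthorization):

```swift
import Speech

// Hypothetical glue for voice-driven editing: transcribe a short audio
// clip on-device, then map the phrase to an edit command. The command
// names are invented; only the transcription API is real.
func editCommand(fromAudioAt url: URL,
                 completion: @escaping (String?) -> Void) {
    guard let recognizer = SFSpeechRecognizer(), recognizer.isAvailable else {
        completion(nil)
        return
    }
    let request = SFSpeechURLRecognitionRequest(url: url)
    request.requiresOnDeviceRecognition = true  // keep the audio off the network

    _ = recognizer.recognitionTask(with: request) { result, _ in
        guard let result = result, result.isFinal else { return }
        let phrase = result.bestTranscription.formattedString.lowercased()
        // Invented mapping from spoken phrase to editing action.
        if phrase.contains("crop") {
            completion("crop")
        } else if phrase.contains("brighten") {
            completion("brighten")
        } else {
            completion(nil)
        }
    }
}
```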
Of course, questions will arise about whether Apple can keep its AI reliable and robust. True to form, Apple tends to put a polished user experience ahead of everything else, so ease of use will likely be the first priority during beta testing. Either way, we will know much more about what iOS 18 brings when WWDC 2024 kicks off on June 10.