Apple (AAPL.US) vows to revamp Apple Intelligence! "On-device user data analysis" to launch soon to upgrade its AI

Zhitongcaijing · 04/15 00:41

The Zhitong Finance App learned that Mark Gurman, the Apple reporter who has repeatedly and accurately revealed details of iPhone updates in advance, recently published an article stating that the US consumer electronics giant Apple (AAPL.US) will soon begin analyzing data on customers' devices to improve the artificial intelligence platform for the iPhone and its other consumer electronics, the so-called Apple Intelligence module. This latest privacy-preserving measure aims to protect the security of user information while helping the company catch up with competitors in AI smartphones and other on-device AI products, and even in generative AI more broadly.

Currently, Apple usually trains its large AI models on synthetic data: information generated by simulating real-world input, containing no personal details. However, Gurman said, citing people familiar with the matter, that synthetic data does not always accurately reflect actual user activity, which seriously limits the effectiveness of Apple's AI systems. Apple's hardware-software ecosystem and the enormous amount of exclusive user data it holds can be described as the company's strongest moat, and they are also expected to be the core driving force for Apple Intelligence to lead on-device artificial intelligence in the future.

Synthetic data refers to artificially created data that simulates the real world. It is generated either with sophisticated statistical methods or with AI techniques such as deep learning and generative models, and it can take the form of multimedia, tables, or text. Organizations across industries use it for research, testing, product development, and machine learning. Its core purpose is to meet the burgeoning demand from AI developers such as OpenAI, whose large models are being rapidly iterated, for higher-quality training data that mirrors the real world.
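To make the idea concrete, a minimal template-based generator can fabricate email-like text that mimics real-world structure without containing any personal details. This is purely an illustrative sketch; the templates and fields below are hypothetical and have nothing to do with Apple's actual pipeline:

```python
# Toy synthetic-data generator: fabricated emails with realistic
# structure but no personal information. Entirely illustrative.
import random

TEMPLATES = [
    "Would you like to play {sport} tomorrow at {time}?",
    "Reminder: the {meeting} meeting moved to {time}.",
]
FILLERS = {
    "sport": ["tennis", "golf"],
    "meeting": ["budget", "design"],
    "time": ["9:00", "11:30"],
}

def synth_email(rng: random.Random) -> str:
    """Fill a random template with random, non-personal values."""
    template = rng.choice(TEMPLATES)
    fields = {k: rng.choice(v) for k, v in FILLERS.items()}
    # str.format ignores keyword arguments the template does not use
    return template.format(**fields)

rng = random.Random(42)          # fixed seed for reproducibility
batch = [synth_email(rng) for _ in range(3)]
for email in batch:
    print(email)
```

Real systems replace the hand-written templates with statistical models or generative AI, but the principle is the same: plausible structure, no real user content.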

Gurman said the latest method will solve a problem Apple's on-device AI systems have faced for a long time, while ensuring that user data always stays on the device and is never used directly by Apple's research team for large-scale AI model training in the cloud. The move is aimed at helping Apple close the generative AI gap with other brands of AI smartphones and with rivals such as OpenAI, Meta, and Google's parent company Alphabet, which face comparatively relaxed privacy constraints.

Gurman said the operating principle of Apple's latest AI technique is as follows: synthetic data automatically created by Apple is compared, on the device, with samples of users' recent emails in the iPhone, iPad, and Mac Mail apps. By checking the simulated inputs against real emails, Apple can determine which items in the synthetic data set best match actual customer usage.
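Apple has not published the exact matching mechanism, but the described comparison can be sketched as a nearest-neighbor selection: the device scores each synthetic candidate against local emails and only the identity of the best-matching synthetic sample (never the email itself) would leave the device. The bag-of-words similarity below is a deliberately simple stand-in for whatever learned embedding a real system would use:

```python
# Illustrative sketch (not Apple's actual implementation): pick which
# synthetic email best matches a user's real mail, entirely on-device.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy bag-of-words vector; a real system would use a learned embedding."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def best_synthetic_match(real_emails: list, synthetic_emails: list) -> int:
    """Return the index of the synthetic email closest to any real email.
    Only this index, not the email text, would be reported off-device."""
    real_vecs = [embed(e) for e in real_emails]
    scores = [max(cosine(embed(s), rv) for rv in real_vecs)
              for s in synthetic_emails]
    return scores.index(max(scores))

synthetic = ["Would you like to play tennis tomorrow at 11:30AM?",
             "Quarterly report attached for your review."]
real = ["Can we reschedule tennis to 11am tomorrow?"]
print(best_synthetic_match(real, synthetic))  # -> 0 (the tennis template)
```

The design point is that the sensitive comparison happens locally, and only an aggregate signal about which synthetic samples look realistic feeds back into model improvement.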

These insights are expected to greatly optimize the text-related functions of the Apple Intelligence module, including notification summaries, the ability to synthesize ideas in the Writing Tools, and the summarization of user messages.

"When creating synthetic data, our goal is to produce simulated sentences or emails whose topic or style is sufficiently close to users' real ones to improve our summarization models, while avoiding collecting users' actual emails from the device," the company's research team wrote in a machine learning blog post published Monday.

Slow Siri updates continue to dampen enthusiasm for iPhone upgrades

Large language models (LLMs) are the core technology behind modern generative AI applications and the foundation of Apple Intelligence. In addition to synthetic data, Apple also trains its large AI models on third-party licensed information and data scraped from the open web.

Recently, many AI research teams have pointed out that over-reliance on synthetic data has serious flaws: in practice, generative AI tools misread notifications and in some cases cannot produce accurate text summaries. These shortcomings are also core reasons Apple Intelligence has been criticized by some users since its introduction.

In theory, after real-world verification and checking, the new system could greatly improve the overall performance of Apple Intelligence, a key step toward making it a strong competitor in the AI field. Notably, the generative AI products from Apple's artificial intelligence team have long lagged behind AI smartphone rivals and large-model competitors such as OpenAI. Apple recently even restructured the management of the Siri digital voice assistant and related AI software.

The launch of a key new feature of the Apple Intelligence module, a new generation of more advanced, AI-driven Siri, has been drastically delayed, and Wall Street investors generally expect the delay to weigh significantly on the iPhone upgrade cycle. In addition, the huge tariff costs caused by the new round of US-China trade friction will also seriously hurt Apple's profits. These two factors are the core reasons Wall Street institutions such as Morgan Stanley have recently cut Apple's target share price sharply, and why Apple's stock has dropped nearly 10% since April.

Morgan Stanley pointed out in its research report that delaying the upgraded Siri will directly flatten the slope of the future iPhone sales growth curve and suppress the upgrade cycle, weighing on the company's results; at the same time, tariff costs will pressure Apple's profit margins and profitability. Morgan Stanley gave Apple an "Overweight" rating and a target price of $220, cut sharply from the previous $275. As of Monday's US close, Apple's stock stood at $202.52.

Deeply integrating large AI models with consumer electronics such as PCs and smartphones to create "on-device AI", systems with strong inference performance that can run offline on local hardware while also tapping vast cloud AI computing resources to meet users' deeper personal needs, has become the core of the "AI blueprint" at many technology companies worldwide.

In Apple fans' vision of an updated Siri, supported by cloud and on-device AI models, Siri would no longer be positioned as an awkward, perfunctory voice assistant. By combining cloud AI computing resources with on-device generative AI, the iPhone could deliver a "personal AI assistant" that better meets users' individual needs, akin to the all-round AI companion in the movie "Her." Apple has stated that the updated Siri will be able to use users' personal information to answer questions and perform actions across applications.

According to smartphone survey data from AlphaWise, a Morgan Stanley research unit, Apple's new generation of more advanced Siri is widely regarded by consumers as the "number one AI application" for driving iPhone upgrades. In AlphaWise's survey, "getting more advanced AI features" entered the top five drivers of smartphone upgrades for the first time, and the "upgraded Siri" was the Apple Intelligence feature of greatest interest to potential iPhone 16 buyers worldwide, outranking other AI features such as image-related enhancements and ChatGPT integration.


The AlphaWise results also showed that about 50% of existing iPhone users who decided not to upgrade to the iPhone 16 said the delayed release of important Apple Intelligence features negatively influenced their decision. The Morgan Stanley team predicts that, absent a "killer AI app," the pace of iPhone upgrades will not be as strong as previously anticipated.

When will the AI system updated with real data arrive?

According to the timeline Gurman revealed in his latest post, the Apple Intelligence AI system improved and iterated through real-world data verification will roll out in stages with the beta versions of iOS 18.5, iPadOS 18.5, and macOS 15.5. Earlier on Monday EST, the second beta of the upcoming system was made available to some developers.

Gurman also said that beyond the improvements above, the consumer electronics giant is exploring privacy-first approaches to improve the models behind other Apple Intelligence features, including popular core functions such as Image Playground, Image Wand, Memory Creation, and Visual Intelligence.

For the Genmoji (custom emoji generation) feature, Apple has already used differential privacy. The company said in its blog that the system "can identify commonly used prompt patterns while mathematically guaranteeing that unique or rare user prompts are never obtained." The idea is to track how the AI model responds to the same request from many users (for example, generating a "dinosaur carrying a briefcase") and to optimize results for such popular scenarios.
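The basic principle behind this can be sketched with the simplest local differential privacy mechanism, randomized response: each device randomly flips its answer to "did you use this prompt?" before reporting, so no individual report is trustworthy, yet the aggregate count can be statistically de-biased. Apple's deployed mechanisms are more sophisticated; the parameters and candidate prompt below are illustrative only:

```python
# Hedged sketch of local differential privacy via randomized response.
# No single report reveals a user's true answer; only aggregate
# popularity of a prompt pattern can be recovered.
import math
import random

def randomized_response(truth: bool, epsilon: float) -> bool:
    """Report the true bit with probability e^eps / (1 + e^eps), else flip it."""
    p_true = math.exp(epsilon) / (1 + math.exp(epsilon))
    return truth if random.random() < p_true else not truth

def estimate_rate(reports: list, epsilon: float) -> float:
    """De-bias the noisy 'yes' count to estimate the true usage rate."""
    n = len(reports)
    p = math.exp(epsilon) / (1 + math.exp(epsilon))
    yes = sum(reports)
    return (yes - n * (1 - p)) / (n * (2 * p - 1))

random.seed(0)                 # fixed seed for reproducibility
epsilon, n = 1.0, 100_000
true_rate = 0.30               # e.g. 30% of devices saw "dinosaur carrying a briefcase"
reports = [randomized_response(random.random() < true_rate, epsilon)
           for _ in range(n)]
est = estimate_rate(reports, epsilon)
print(round(est, 2))           # close to 0.30 despite every report being noisy
```

Rare, user-specific prompts stay hidden in the noise, while genuinely common patterns (the "commonly used instruction patterns" Apple describes) stand out in the aggregate.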

Gurman said these features are enabled only for iPhone and other device users who opt in to the "Device Analytics and Product Improvement" program; the option can be managed in the "Privacy & Security" section of the device's Settings app.

"Building on our years of experience with techniques such as differential privacy, as well as new methods such as synthetic data generation, we can continue to improve Apple Intelligence features while protecting user privacy for users who opt in to the analytics program," Apple said in the blog post.

Apple's artificial intelligence team has been in turmoil for months. Gurman earlier reported on the unit's technical difficulties, leadership problems, major Siri product delays, and executive changes.

In March, Apple restructured part of the AI team's management, handing the Siri business from executive John Giannandrea to Vision Pro creator Mike Rockwell and software chief Craig Federighi. Apple plans to announce a major upgrade to Apple Intelligence in June, but long-awaited features such as the AI-based Siri are not expected to launch until next year.