Will the "revolution" of AI applications be in Apple's next big model?

Wallstreetcn
2025.11.11 08:12

Nomura stated that the latest media reports and Apple's papers show that Apple's AI strategy centers on an "edge-cloud collaborative" agent framework: a cloud-based Google model with 1.2 trillion parameters would serve as the "high-order reasoning brain," directing five specialized agents running on-device to work together. Through the CAMPHOR model, this architecture handles personalized tasks, protecting privacy while efficiently accessing personal data. Nomura believes this revolutionary model will mark edge-side AI's entry into a large-scale application phase, is expected to ignite a new round of hardware upgrade cycles starting in 2026, and will reshape the AI application ecosystem.

Apple's AI strategy is becoming increasingly clear, with its core not merely chasing larger language models, but rather building a revolutionary "edge-cloud collaborative" agent framework.

On November 10th, according to Hard AI news, Nomura stated in its latest research report that recent intelligence suggests Apple may use a powerful cloud-based large model (rumored to be Google's 1.2 trillion parameter model) as a "high-order reasoning brain" to command multiple specialized "edge agents" running on devices that can access users' personal data.

Nomura indicated that this hybrid architecture aims to address the core pain points of current AI applications: how to securely and efficiently utilize users' personal data while leveraging the powerful computing power of the cloud. The firm emphasized that the "collaborative agent model" envisioned by Apple is revolutionary, with task complexity and practicality far exceeding any single large language model (LLM) currently available.

Analysts believe that if this strategy succeeds, it will mark the true entry of "edge AI" into large-scale practical application, with significance far beyond existing capabilities.

It can perform highly personalized, context-aware complex tasks that pure cloud LLMs cannot achieve. This could not only ignite a new round of hardware upgrade cycles starting in 2026 (benefiting higher-performance processors, memory, and wireless communication technologies) but also reshape the AI application ecosystem.

Cloud Brain + Terminal Agents: Apple's Integration of Virtual and Physical

The research report states that, according to a Bloomberg report on November 6, 2025, Apple plans to adopt Google's 1.2 trillion parameter large language model in its cloud services. Although this news has not been fully confirmed, it aligns closely with the technological path previously disclosed by Apple.

Nomura stated that Apple's strategy is not simply to purchase a "brain," but to integrate it into a larger "collaborative agent model" framework.

The core of this framework is edge-cloud integration. A very large cloud model acts as the "high-order reasoning agent," responsible for understanding the complex instructions users issue. The actual executors are a series of "edge agents" running locally on devices such as the iPhone: after the high-order agent parses an instruction, it assigns tasks to the various edge agents. This architecture greatly conserves computing resources and memory bandwidth, because what is passed to the edge agents is compressed instruction data rather than bulk raw data.

More critically, Apple has designed an offline backup solution for this architecture: when handling simple queries or when devices are offline, a "simple reasoning agent" running on the device can replace the cloud brain, ensuring the availability of basic functions.
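The routing logic described above can be sketched in a few lines. This is a hypothetical illustration only: Apple has published no API, and every class and function name here is invented to mirror the report's description of cloud-versus-device routing.

```python
from dataclasses import dataclass

@dataclass
class Query:
    text: str
    is_complex: bool  # e.g., a multi-step instruction needing high-order reasoning

def cloud_high_order_agent(query: Query) -> str:
    # Placeholder for the cloud-hosted large model (the "high-order reasoning brain").
    return f"cloud plan for: {query.text}"

def on_device_simple_agent(query: Query) -> str:
    # Placeholder for the lightweight on-device "simple reasoning agent".
    return f"local answer for: {query.text}"

def route(query: Query, device_online: bool) -> str:
    """Send complex queries to the cloud when online; for simple queries
    or offline devices, fall back to the on-device agent."""
    if device_online and query.is_complex:
        return cloud_high_order_agent(query)
    return on_device_simple_agent(query)
```

The key design point the report highlights is that the fallback path keeps basic functions available even with no network connection.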

Five Agents Collaborating: How the CAMPHOR Model Disrupts User Experience

Nomura stated that a recent paper published by Apple, titled "CAMPHOR: Collaborative Agents for Multi-Input Planning and High-Order Reasoning on Devices," reveals the internal operational mechanisms of this system in detail. The system consists of a cloud-based "high-level reasoning agent" and five specialized agents running on devices, working together to accomplish tasks that traditional LLMs cannot handle.

The five edge-side agents are:

Personal Context Agent: Responsible for searching information in the user's personal database to understand queries based on the user's personal background.

Device Information Agent: Retrieves data related to the device's status, such as the time and location the query was initiated, and the content displayed on the screen at that time.

User Perception Agent: Obtains the user's recent activity records on the device.

External Knowledge Agent: Collects data from external resources (such as web pages, Wikipedia, calculators).

Task Completion Agent: Calls applications on the device to respond to and fulfill the user's requests.
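The five roles above amount to a dispatch table the high-order reasoning agent can route sub-tasks through. The sketch below is purely illustrative: the role names come from the article's list, but the dispatch mechanism and all identifiers are assumptions, not Apple's implementation.

```python
from enum import Enum, auto

class AgentRole(Enum):
    # The five on-device agent roles listed in the CAMPHOR paper.
    PERSONAL_CONTEXT = auto()    # search the user's personal database
    DEVICE_INFORMATION = auto()  # device state: time, location, screen content
    USER_PERCEPTION = auto()     # the user's recent activity on the device
    EXTERNAL_KNOWLEDGE = auto()  # web pages, Wikipedia, calculators
    TASK_COMPLETION = auto()     # call on-device apps to fulfill requests

# Hypothetical handlers standing in for the real on-device specialists.
handlers = {
    AgentRole.PERSONAL_CONTEXT:   lambda task: f"lookup: {task}",
    AgentRole.DEVICE_INFORMATION: lambda task: f"device: {task}",
    AgentRole.USER_PERCEPTION:    lambda task: f"activity: {task}",
    AgentRole.EXTERNAL_KNOWLEDGE: lambda task: f"fetch: {task}",
    AgentRole.TASK_COMPLETION:    lambda task: f"execute: {task}",
}

def dispatch(role: AgentRole, task: str) -> str:
    """Route a compact sub-task to the specialist for the given role."""
    return handlers[role](task)
```

The point of the table structure is that the cloud planner never touches personal data directly; it only names a role and a compact task, and the matching on-device agent does the privacy-sensitive work locally.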

The research report provides a vivid example to illustrate the workflow. Suppose a user says, "Help me find the cheapest flight to Barcelona next month and add it to my calendar. Also, notify my travel partner about our plans."

First, the "high-level reasoning agent" parses this complex instruction.

Then, it activates the "Device Information Agent" to obtain information about the current month;

Next, it calls the "Personal Context Agent" to identify who the "travel partner" is from the user's data;

Finally, it instructs the "Task Completion Agent" to search for flights in the ticketing application and notify the travel partner via email or messaging app once found.
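The steps above can be sketched as a single planning function. Everything here is a toy reconstruction of the article's flight-booking example: the function names, the contact data, and the return values are all invented for illustration and do not come from Apple's code.

```python
def device_information_agent() -> dict:
    # Would read device state (time, location, on-screen content).
    return {"current_month": "November"}

def personal_context_agent(entity: str) -> str:
    # Would search the user's personal database, e.g. contacts.
    contacts = {"travel partner": "alex@example.com"}  # dummy data
    return contacts[entity]

def task_completion_agent(action: str, **kwargs) -> str:
    # Would invoke on-device apps (a ticketing app, Mail or Messages).
    return f"{action} done with {kwargs}"

def high_order_reasoning_agent(instruction: str) -> list[str]:
    """Cloud-side planner: parses the instruction, then dispatches
    compact sub-tasks to the on-device agents in sequence."""
    results = []
    month = device_information_agent()["current_month"]
    partner = personal_context_agent("travel partner")
    results.append(task_completion_agent(
        "search_flights", destination="Barcelona", month=month))
    results.append(task_completion_agent(
        "notify", recipient=partner, about="trip plans"))
    return results
```

Note that the planner only ever receives compact results (a month, a contact handle), which is how the report says the architecture keeps raw personal data on the device.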

Nomura believes that the revolutionary aspect of this model lies in its ability to legally and efficiently utilize personal and device-specific data that pure cloud LLMs cannot access, thereby providing truly personalized and seamlessly integrated services.

The Eve of the Edge AI Revolution: New Opportunities Are Emerging

Nomura points out in the research report that, because the model integrates external knowledge access, it is expected to become a daily tool for the general public, signaling that we are on the eve of "edge AI" or "AI agents" entering real-world applications.

Looking ahead, it is anticipated that from 2026 onwards, market expectations for edge AI will further rise. The following areas of technological advancement will be key:

Personalization and Privacy Protection: How to provide stronger privacy protection technologies while utilizing personal data.

Instant Response Performance Improvement: This directly requires significant enhancements in wireless communication, processor (GPU), and memory bandwidth performance.

Expansion of Personal Data Scope: By integrating personal data from more sources, such as wearable devices, services can extend into new areas like health and training recommendations.

Nomura believes that the future winners will not simply be the companies with the largest models, but those that can achieve efficient, low-power, high-security computing at the edge and successfully build a collaborative software-hardware ecosystem.

Apple's positioning indicates that the era of truly intelligent personal assistants may be approaching, and related hardware innovations will be the foundation for all of this.