Flash News

Google Cloud CEO: Gemini Will Support Apple's Personalized Siri

Google Cloud CEO Thomas Kurian confirmed in a recent public statement that Google is serving as Apple's preferred cloud partner, supplying Gemini-based technology and computing power for Apple Foundation Models to create a more "personalized" next-generation Siri. A joint statement released by the two companies earlier this year indicates that Apple's next-generation in-house foundation models will be built on Gemini technology, offering users an upgraded intelligent-assistant experience through Apple Intelligence features.

According to Bloomberg reporter Mark Gurman, Apple is testing a standalone Siri application in iOS 27 and macOS 27, upgrading it from a system-level voice assistant to a chatbot-style conversational interface. The new interface will support both text and voice conversations, along with a conversation history, favorites, and search, and will surface visual results in the Dynamic Island at the top of the screen via a new "Ask Siri" entry point. Related reports and industry analyses suggest that the new Siri will be powered by Apple Foundation Models while drawing on Gemini capabilities in the cloud, backed by a multi-year collaboration worth about $1 billion per year, with the first results expected to debut at this year's WWDC.

Source: Public Information

ABAB AI Insight

This statement clarifies a key fact: Apple has not chosen to build everything in-house but has partially "outsourced" the foundation-model layer to Google, using Gemini to provide technology and cloud support for its own Apple Foundation Models. For Google, this expands Gemini's role from a consumer-facing product into infrastructure for the entire mobile ecosystem; for Apple, it brings in the strongest external general-purpose model as a transitional solution while retaining control over the front-end experience and the privacy narrative.

From an industry-structure perspective, this means intelligent assistants are entering a "dual-layer architecture" stage: Apple controls the front end, managing interaction, system integration, and branding (Siri, Apple Intelligence), while large models like Gemini power the back end, providing reasoning and generation capabilities. This separation redistributes pricing power and influence: platform vendors own the entry point and the experience, while model and cloud vendors control the computing power and algorithmic foundations. Their competition no longer centers on individual products but on long-term revenue sharing, data usage, and control of the technology roadmap.

For Apple, the standalone Siri application, conversation history, favorites, and the unified "Search or Ask" entry point signal an attempt to upgrade Siri from a single feature into an operating-system-level AI hub, letting users move seamlessly among search, commands, and conversations. This essentially rewrites the human-computer interaction layer: once Siri becomes the default entry point for questions and the default agent for execution, the flow of traffic and attention across iOS and macOS will be restructured, further concentrating the app ecosystem's leverage in the system and its assistant.

In the longer term, the integration of Gemini into Siri marks an inflection point in the mobile ecosystem's shift from "app-centric" to "assistant-centric." Over the past decade, value accumulated primarily in apps and app stores; going forward, the real entry point may be an AI agent built around personal context, responsible for interpreting intent, orchestrating applications, and invoking services. In that structure, whoever controls the assistant controls demand distribution and the data feedback loop, and the collaboration and competition between model suppliers and device platforms will determine how computing power, privacy, and commercial value are ultimately distributed across the ecosystem.
