Apple plans to buy Google AI services for $1 billion, with a 1.2 trillion parameter model helping Siri upgrade significantly

The new version of Siri is expected to launch next spring, with Google's Gemini model handling core functions such as information synthesis and task execution. Apple stresses, however, that this is only a transitional arrangement while it develops its own 1-trillion-parameter model. After the news broke, both companies' shares briefly hit intraday highs on Wednesday.

Apple is planning to use a 1.2-trillion-parameter artificial intelligence model from Google to power its long-promised upgrade of the Siri voice assistant.

On November 5, sources cited by the media said the two sides are finalizing an agreement under which Apple will pay roughly $1 billion a year for the rights to use Google's technology.

The new Siri is expected to launch next spring, with Google's Gemini model handling core functions such as information synthesis and task execution.

After the announcement, both companies' shares briefly jumped to intraday highs on Wednesday. Apple's stock rose less than 1% to $271.70, while Alphabet's climbed as much as 3.2% to $286.42, before both gave back their gains.

Although Apple emphasizes that this is only a transitional solution and is still developing its own 1-trillion-parameter model, Google's Gemini 2.5 Pro currently leads most large language model rankings, and catching up will not be easy.

A significant boost to AI processing capability

The customized Gemini system Google is supplying represents a significant technological leap.

Compared with the roughly 150-billion-parameter model currently used in the cloud version of Apple Intelligence, Gemini's 1.2 trillion parameters will substantially expand the system's processing power and its ability to understand complex data and context.

Gonglubian.com previously reported that Apple had considered other third-party models for the job. After testing Gemini, OpenAI's ChatGPT, and Anthropic's Claude, Apple settled on Google earlier this year. Apple intends to use the technology as a stopgap until its own model is powerful enough.

The project, codenamed Glenwood inside Apple, is led by Vision Pro headset developer Mike Rockwell and software engineering chief Craig Federighi. The new voice assistant itself, codenamed Linwood, is planned for release with iOS 26.4.

A clear division of labor in the technical architecture

Under the reported agreement, Google's Gemini model will handle Siri's information summarization and task-planning functions, which help the voice assistant synthesize information and decide how to carry out complex tasks. Some Siri features will continue to run on Apple's in-house models.

The model will run on Apple's own Private Cloud Compute servers, keeping user data isolated from Google's infrastructure. Apple has already allocated AI server hardware to support it.

The report says that despite the deal's considerable scale, it is unlikely to be promoted publicly; Apple will treat Google as a behind-the-scenes technology supplier.

That distinguishes this agreement from the two companies' Safari deal, which makes Google the browser's default search engine.

The agreement is also separate from earlier discussions about integrating Gemini directly into Siri as a chatbot.

Those talks came close to a deal in 2024 and again earlier this year but never materialized. Nor does this collaboration involve embedding Google's AI search into Apple's operating systems.

On Apple's recent earnings call, CEO Tim Cook said Siri may eventually offer more chatbot options beyond the current ChatGPT.

Apple is not the only company using Gemini to power AI features; many large companies, including Snap, build applications on Google's Vertex AI platform. But for Apple, the move amounts to an acknowledgment that it has fallen behind in AI and is willing to lean on outside technology to catch up.

In-house development continues

Apple still does not see Gemini as a long-term solution.

According to sources cited by the media, even as the company loses AI talent, including leaders of its model team, management plans to keep developing new AI technology and hopes eventually to replace Gemini with an in-house solution.

To that end, Apple's model team is building a cloud-based model with 1 trillion parameters, which it hopes to have ready for consumer applications as early as next year.

Apple executives believe they can reach a quality level comparable to the customized Gemini product. But Google keeps improving Gemini, so catching up will not be easy: Gemini 2.5 Pro currently ranks near the top of most large language model leaderboards.
