Apple has made a technological breakthrough: it is expected to run a large language model on iPhone

2024-07-13 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)12/24 Report--

CTOnews.com, December 21 (Xinhua) -- Apple's artificial intelligence researchers say they have made a major breakthrough: using an innovative flash-memory technique, they have successfully deployed a large language model (LLM) on Apple devices with limited memory. The achievement could bring a more powerful Siri, real-time language translation, and cutting-edge AI features in photography and augmented reality to future iPhones.

In recent years, LLM-powered chatbots such as ChatGPT and Claude have become popular worldwide. They can hold fluent conversations, write text in different styles, and even generate code, demonstrating strong language understanding and generation. These models have an Achilles' heel, however: they are extremely hungry for data and memory, and an ordinary mobile phone simply cannot meet their runtime requirements.

To break through this bottleneck, Apple's researchers opened up a new path by turning to flash memory, the storage that is ubiquitous in mobile phones and holds apps and photos. In the paper titled "LLM in a flash: Efficient Large Language Model Inference with Limited Memory", the researchers propose an ingenious technique for storing the LLM's model data in flash. The authors point out that the flash capacity in mobile devices is far larger than that of the DRAM traditionally used to run LLMs.

CTOnews.com notes that their approach uses two key techniques to work around these limits, minimizing data transfer and maximizing flash-memory throughput:

Windowing: think of it as a form of recycling. Instead of loading fresh data for every step, the AI model reuses some of the data it has already processed. This reduces the need for frequent memory reads and makes the whole process smoother and more efficient.

Row-column bundling (Row-Column Bundling): this technique is akin to reading a book paragraph by paragraph rather than word by word. By grouping data into larger contiguous chunks, it can be read from flash memory faster, speeding up the model's ability to understand and generate language.
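The two ideas above can be sketched in a few lines of Python. This is purely an illustrative toy, not Apple's implementation: the layer sizes, window length, and the dictionary standing in for flash storage are all assumptions made for demonstration. Each simulated flash record bundles an up-projection row with its matching down-projection column (so one read fetches both), and a sliding window over recent tokens keeps already-fetched neuron weights cached in RAM so overlapping activations cost no new flash reads.

```python
# Toy sketch of "windowing" + "row-column bundling" (NOT Apple's code).
# Assumptions: toy sizes, a dict simulating flash, random weights.
import numpy as np

HIDDEN, FFN = 8, 16          # hypothetical layer dimensions
WINDOW = 3                   # tokens whose neurons stay cached in RAM

# Bundled flash storage: record i = [up-proj row i | down-proj column i],
# so a single contiguous read returns both halves of neuron i.
rng = np.random.default_rng(0)
flash = {i: rng.standard_normal(2 * HIDDEN) for i in range(FFN)}
flash_reads = 0

cache = {}                   # neuron id -> bundled weights, held in "RAM"
window = []                  # active-neuron sets for the last WINDOW tokens

def load_neurons(active):
    """Fetch only uncached neurons; evict those outside the sliding window."""
    global flash_reads
    window.append(set(active))
    if len(window) > WINDOW:
        window.pop(0)
    needed = set().union(*window)
    for i in list(cache):
        if i not in needed:          # evict neurons no recent token used
            del cache[i]
    for i in active:
        if i not in cache:           # one bundled read: row AND column
            cache[i] = flash[i]
            flash_reads += 1
    return {i: cache[i] for i in active}

# Token 1 activates neurons {0,1,2}; token 2 overlaps heavily, so the
# window cache turns three potential flash reads into just one.
load_neurons([0, 1, 2])
first = flash_reads
load_neurons([1, 2, 3])
print(first, flash_reads - first)   # 3 reads for token 1, then only 1 more
```

Because consecutive tokens in real models tend to activate overlapping neuron sets, this kind of reuse is what cuts flash traffic; the bundling simply makes each unavoidable read a larger, sequential one.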

The paper states that this technique lets a device run AI models up to twice the size of the iPhone's available memory. With its help, LLM inference speed improves by 4-5x on Apple's M1 Max CPU and by 20-25x on its GPU. "This breakthrough is essential for the deployment of advanced LLMs in resource-constrained environments, greatly expanding their applicability and accessibility," the researchers wrote.

These breakthroughs in faster, more efficient on-device AI open up new possibilities for future iPhones, such as more advanced Siri features, real-time language translation, and sophisticated AI-driven capabilities in photography and augmented reality. The technology also lays the groundwork for iPhones to run complex AI assistants and chatbots on device, work that Apple is said to have already begun.

Apple's generative AI may eventually be integrated into its Siri voice assistant. In February 2023, Apple held an artificial intelligence summit and briefed employees on its large language model work. According to Bloomberg, Apple's goal is to create a smarter Siri that is deeply integrated with AI. Apple plans to update the way Siri interacts with the Messages app so that users can handle complex questions more effectively and auto-complete sentences. Apple is also rumored to be planning to add AI to as many of its apps as possible.

Apple GPT: a super-brain in your pocket

Apple is reportedly developing its own generative AI model, code-named "Ajax". Designed to compete with OpenAI's GPT-3 and GPT-4, Ajax is said to run on 200 billion parameters, suggesting a high degree of sophistication in language understanding and generation. Internally dubbed "Apple GPT", Ajax is meant to unify Apple's machine learning development, a sign that Apple is integrating AI ever more deeply into its ecosystem.

According to the latest reports, Ajax is considered more capable than the earlier GPT-3.5. However, some sources also note that OpenAI's newer models may have already surpassed Ajax's capabilities.

Both The Information and analyst Jeff Pu claim that Apple will offer some form of generative AI on the iPhone and iPad around late 2024, when iOS 18 is released. Pu said in October that Apple would build several hundred AI servers in 2023 and more in 2024, and that Apple will reportedly offer a combination of cloud-based AI and on-device processing.

© 2024 shulou.com SLNews company. All rights reserved.
