

To build the best cloud foundation for the large model era, Baidu Intelligent Cloud throws a three-part "combination punch"

2024-10-09 Update From: SLTechnology News&Howtos


Shulou (Shulou.com) 12/24 report --

On December 20, the 2023 Baidu Cloud Intelligence Conference was held in Beijing under the theme "Large Models Reshape Cloud Computing, Cloud for AI", focusing on the changes that large models are driving in cloud computing.

Baidu Intelligent Cloud said that, to meet the needs of putting large models into production, it is rebuilding its cloud computing services around its "cloud-AI integration" strategy. It has now completed an end-to-end upgrade and reconstruction spanning underlying infrastructure, large model development and application, and AI native application development: more than 20 cloud computing infrastructure products have been comprehensively upgraded, including the Baige AI heterogeneous computing platform and the Qianfan large model platform, and the AI native application development workbench "Qianfan AppBuilder" has been fully opened.

At the conference, Baidu Intelligent Cloud also presented its latest "report card". Since the Wenxin large model was fully opened to the public on August 31, daily API call volume for large models on the Qianfan platform has grown 10-fold. To date, the Qianfan platform has served more than 40,000 enterprise users and helped them fine-tune nearly 10,000 large models. Compared with training a large model on a self-built stack, training on the Qianfan platform can cut costs by up to 90%.

Large models are reshaping cloud computing

"the native era of AI has begun, and big models are driving innovation and change in the cloud computing industry." Hou Zhenyu, vice president of Baidu Group, said: "the large model reconfiguration of cloud computing is mainly reflected in three levels: AI native cloud will change the pattern of cloud computing, model as a service (MaaS) will become a new basic service, and AI native applications will give birth to a new R & D paradigm."

First, at the cloud infrastructure layer: applications in the mobile Internet era mostly ran on CPU computing power, whereas AI applications' demand for GPU and heterogeneous computing is rising sharply, so the cloud market's underlying computing demand will gradually shift toward GPUs.

Second, at the model layer, large models are becoming a general service capability, namely MaaS. MaaS will greatly lower the barrier to putting AI into production and deliver real AI value.

Finally, at the application layer, the application development paradigm is being completely overturned. Large models' distinctive capabilities in understanding, generation, logic, and memory will give rise to a new paradigm for native application development, changing the entire application technology stack, data flow, and business flow.

▲ Hou Zhenyu, vice president of Baidu Group

Hou Zhenyu said that, broadly speaking, building a thriving AI native application ecosystem requires large models, intelligent computing power, and the new AI native application R&D paradigm to complement one another. Large models are the "brain" of AI native applications, intelligent computing power provides the solid support on which those applications run, and the new R&D paradigm helps developers build applications on top of large model capabilities efficiently. The data flywheel is a necessary and sufficient condition for a successful AI native application, allowing the large model to iterate rapidly and the product experience to keep improving.

▲ Cloud computing comprehensively upgraded: more than 20 full-stack products across five major areas

Facing the AI native era, Baidu is fully rebuilding its cloud computing infrastructure system for large models. Hou Zhenyu said Baidu Intelligent Cloud will restructure its cloud computing services along three lines: model-oriented intelligent computing infrastructure, data-oriented data infrastructure, and application-oriented cloud infrastructure, to support the landing of AI native applications.

At the conference, Baidu Intelligent Cloud released or upgraded more than 20 cloud computing products covering five major areas: intelligent computing, general computing, database and big data, distributed cloud, and application development platforms.

In intelligent computing, computing power is the basic prerequisite for landing large models. Large model training, inference, and deployment place high demands on high-speed interconnects, computing efficiency, and computing cost, which calls for a new intelligent computing infrastructure. Today's computing clusters, however, still face many challenges: large model training runs are long, error-prone, and unstable, and the clusters' large scale and high system complexity make operations and maintenance difficult.

The newly released Baige AI heterogeneous computing platform 3.0 is specifically optimized for AI native applications and for large model training and inference. Baige 3.0 substantially upgrades stability, efficiency, and ease of operations: effective training time for tasks at the 10,000-GPU-card scale exceeds 98%, and effective bandwidth utilization reaches 95%. Compared with self-built intelligent computing infrastructure, model training throughput and inference throughput can be improved by up to 30% and 60%, respectively.

▲ Baidu Baige 3.0

To address the supply-demand balance of intelligent computing power in the AI native era, Baidu also released its intelligent computing network platform. At the resource level, the platform connects intelligent computing nodes worldwide, including Baidu's own and third-party intelligent computing centers, supercomputing centers, and edge nodes, pooling scattered, heterogeneous computing resources into a unified computing network resource pool. Baidu's self-developed computing power scheduling algorithm then analyzes the status, performance, and utilization of these resources and schedules computing power across them, so that intelligent computing resources are delivered to users elastically, stably, and efficiently, achieving a "south-to-north water diversion" for computing power.
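The scheduling idea described above, analyzing each node's status and load and routing work to the best available resource, can be illustrated with a minimal sketch. The node attributes, scoring rule, and greedy selection below are illustrative assumptions, not Baidu's actual scheduling algorithm.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ComputeNode:
    name: str              # an intelligent computing center, supercomputing center, or edge node
    free_gpus: int         # currently idle accelerator cards
    utilization: float     # current load, 0.0 - 1.0
    bandwidth_gbps: float  # interconnect bandwidth

def pick_node(nodes: list, gpus_needed: int) -> Optional[ComputeNode]:
    """Greedy placement: among nodes with enough free cards, prefer the one
    with the lowest utilization, breaking ties by higher bandwidth."""
    candidates = [n for n in nodes if n.free_gpus >= gpus_needed]
    if not candidates:
        return None
    return min(candidates, key=lambda n: (n.utilization, -n.bandwidth_gbps))

pool = [
    ComputeNode("baidu-idc-north", free_gpus=512, utilization=0.72, bandwidth_gbps=400),
    ComputeNode("partner-supercomputing", free_gpus=256, utilization=0.35, bandwidth_gbps=200),
    ComputeNode("edge-node-east", free_gpus=16, utilization=0.10, bandwidth_gbps=25),
]
print(pick_node(pool, gpus_needed=128).name)  # -> partner-supercomputing
```

A real computing network scheduler would weigh many more signals (locality, cost, failure domains), but the pooling-then-placement structure is the core idea.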

In general computing, cloud native infrastructure for compute, storage, and networking also needs to be rebuilt and upgraded for the AI native era, to provide more elastic, higher-performance capabilities with intelligent operations and maintenance.

At the conference, Baidu Taihang Computing added three new compute instance families: the seventh-generation general-purpose CVM instance G7, with about 10% higher overall performance than the previous generation; the Kunlun chip elastic bare-metal instance NKL5, equipped with Baidu's self-developed Kunlunxin R300 accelerator, which improves overall performance in large model inference scenarios by up to 50% over mainstream accelerator cards; and the elastic high-performance computing instance NH6T, based on the Ascend 910B accelerator, whose overall performance in large model training scenarios is up to 40% higher than mainstream accelerator cards.

At the same time, the high-performance computing platform CHPC (Cloud HPC) was officially released, providing users with one-stop public cloud HPC services. At the resource level, CHPC lets users create a high-performance computing environment with one click and scale cloud resources elastically as the business changes; at the application level, CHPC comes with integrated industry applications such as drug R&D and gene sequencing. In addition, combined with Baidu Netdisk and other services integrated within the VPC, users gain full-link data access for HPC jobs, from file submission and upload through processing, result return, and distribution, helping improve R&D efficiency.

In distributed cloud, Baidu Intelligent Cloud brought three major upgrades: the edge computing node BEC product was upgraded to build a globally unified edge computing network and product experience, creating the most comprehensive "cloud edge" for the AI native application era; new proprietary cloud ABC Stack capabilities were released to support on-premises deployment of the Baidu Intelligent Cloud Qianfan large model platform; and new capabilities of the local computing cluster LCC were released, supporting a new generation of CPU/GPU instances and the complete Baidu Intelligent Cloud AI & HPC cluster management capabilities, further enriching the infrastructure and cloud product support matrix.

Landing large models requires not only computing power but also the ability to store, manage, and analyze all kinds of data and massive knowledge. Baidu Intelligent Cloud released a series of major new data infrastructure products spanning cloud storage, cloud native databases, and big data platforms.

The unified technology base of Baidu Canghai Storage was officially released; it underpins all of Baidu's storage products and meets the large-scale, high-performance, low-cost storage requirements of the AI native era. The newly upgraded object storage BOS, cloud disk CDS, parallel file storage PFS, and other products have been comprehensively enhanced for data lake storage and AI storage, accelerating intelligent computing and unlocking the value of data.

The cloud native database GaiaDB 4.0 was officially released. It strengthens parallel query capability, breaking through the single-machine computing bottleneck with cross-machine, multi-core parallel queries, and improves performance by more than 10x in mixed-load and real-time analytics scenarios. Column store indexes and a column storage engine are introduced for different workloads, speeding up queries over data sets of different sizes; the storage engine supports complex analysis over PB-scale data while remaining strictly isolated from transactional workloads. Together with a series of deep data-path optimizations, such as consensus protocol optimization, link optimization, adaptive dynamic replay, and multi-version storage, GaiaDB's overall performance improves by more than 60%.

During the conference, Cheng Pingyao, general manager of Hangzhou Geely Yiyun Technology Co., Ltd., shared how Geely Group has worked with Baidu Intelligent Cloud to build a group-wide proprietary cloud and digital infrastructure base. Geely Group has already moved its connected-vehicle and manufacturing businesses to the cloud, and on top of this efficient, stable infrastructure platform it is building enterprise-level AI large model capabilities to empower the group's businesses across the board.

▲ Cheng Pingyao, general manager of Hangzhou Geely Yiyun Technology Co., Ltd.

Model as a Service (MaaS): the Qianfan large model platform upgraded

In the AI native era, large models will be delivered to users as a new general service capability through Model as a Service (MaaS) platforms. The Baidu Intelligent Cloud Qianfan large model platform (hereinafter "Qianfan platform"), the industry's leading MaaS platform, comes preloaded with 54 mainstream foundation and industry large models, including Baidu's Wenxin models, and provides a complete, easy-to-use tool chain for continuous pre-training, fine-tuning, evaluation, compression, and deployment of large models, helping customers quickly customize their own large models for their business scenarios. Compared with training a large model on a self-built stack, training on the Qianfan platform can cut costs by up to 90%.
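As a rough illustration of what such a tool chain spans, the sketch below lays out the stages named above (fine-tuning, evaluation, compression, deployment) as a toy job description. All field names and values are invented for illustration; they are not the Qianfan platform's real job schema or API.

```python
# Illustrative pipeline over a preset foundation model.
# Field names and values are assumptions, not the Qianfan platform's real schema.
finetune_job = {
    "base_model": "<preset-foundation-model>",   # one of the platform's preset models
    "method": "lora",                            # parameter-efficient fine-tuning
    "train_dataset": "customer_faq_v1",
    "epochs": 3,
    "learning_rate": 2e-5,
}
pipeline = [
    ("fine_tune", finetune_job),
    ("evaluate", {"metric": "accuracy", "eval_set": "customer_faq_holdout"}),
    ("compress", {"method": "int8_quantization"}),
    ("deploy", {"endpoint": "faq-assistant", "billing": "compute_unit"}),
]
for stage, config in pipeline:
    print(f"{stage}: {config}")
```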

▲ Baidu Intelligent Cloud Qianfan large model platform

To date, the Qianfan platform has served more than 40,000 enterprise users and fine-tuned nearly 10,000 large models. At this conference, targeting the two core demands customers care about most when applying large models, "improving efficiency" and "reducing cost", the Qianfan platform released a series of new functions.

On the data side, the Qianfan platform provides a complete, efficient large model data management tool chain, covering data collection, cleaning, automatic labeling, automatic augmentation, and multi-dimensional evaluation of inference data, helping users quickly build a "data flywheel" for their own business and achieve feedback-driven growth. Newly released functions such as data statistical analysis and data quality checking give users all-round data insight; combined with a visual data cleaning pipeline, they produce high-quality data "fuel" for large model scenarios and safeguard large model training. With this, Qianfan becomes the first MaaS platform in China to support multi-dimensional data analysis for large models.
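To make the shape of such a "data flywheel" tool chain concrete, the sketch below strings together cleaning and automatic labeling steps on toy data. The function names and heuristics are assumptions made for illustration, not the Qianfan platform's actual pipeline.

```python
import re

def clean(samples: list) -> list:
    """Basic cleaning: normalize whitespace, drop empty and duplicate records."""
    seen, out = set(), []
    for s in samples:
        s = re.sub(r"\s+", " ", s).strip()
        if s and s not in seen:
            seen.add(s)
            out.append(s)
    return out

def auto_label(sample: str) -> dict:
    """Toy auto-labeling step; a real pipeline would call a labeling model."""
    lang = "zh" if re.search(r"[\u4e00-\u9fff]", sample) else "en"
    return {"text": sample, "length": len(sample), "lang": lang}

raw = ["  What is MaaS? ", "What is MaaS?", "", "千帆平台支持哪些模型？"]
dataset = [auto_label(s) for s in clean(raw)]
print(dataset)  # deduplicated, labeled records ready for training or evaluation
```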

As the "last kilometer" before the model is deployed, the model evaluation can comprehensively evaluate the output effect and computing performance of the large model, so as to ensure the stability and reliability of the native application of AI after the launch of the large model. Qianfan platform innovatively introduces the dual evaluation mechanism of the combination of automation and labor, gives full play to the advantages of both sides, and greatly improves the efficiency and quality of model evaluation. On the one hand, Baidu Wenxin big model, as an AI referee, can automatically score the answers of the evaluated model, greatly reducing a large amount of repetitive manual work; on the other hand, by scoring the answers of large models by data markers / business experts, it can achieve a refined measurement of the quality of large model answers.

In addition, the Qianfan platform's flexible pricing options, billing by Tokens (for highly elastic usage), by TPM (tokens per minute, for high-concurrency workloads), by batch computing (for offline content production and other high-throughput tasks with low real-time requirements), and by compute units (for customers who need models deployed on dedicated hardware), readily cover a wide range of business scenarios and help enterprises use large models at low cost.
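To make the trade-off between these billing modes concrete, the toy calculation below compares pay-per-token billing with a fixed monthly fee for a reserved TPM quota. All unit prices and quota fees here are invented purely for illustration; they are not Qianfan's actual list prices.

```python
# Illustrative, made-up prices (not real Qianfan pricing).
PRICE_PER_1K_TOKENS = 0.012      # pay-as-you-go, currency units per 1,000 tokens
TPM_QUOTA_MONTHLY_FEE = 2000.0   # fixed monthly fee for a reserved TPM quota

def pay_as_you_go_cost(tokens_per_day: int, days: int = 30) -> float:
    """Monthly cost if every token is billed on demand."""
    return tokens_per_day * days / 1000 * PRICE_PER_1K_TOKENS

for daily_tokens in (1_000_000, 10_000_000, 100_000_000):
    on_demand = pay_as_you_go_cost(daily_tokens)
    better = "TPM quota" if TPM_QUOTA_MONTHLY_FEE < on_demand else "pay per token"
    print(f"{daily_tokens:>12,} tokens/day: on-demand approx. {on_demand:,.0f}, cheaper option: {better}")
```

The pattern the paragraph describes follows directly: low or bursty volumes favor per-token billing, while sustained high-concurrency traffic favors a reserved quota or dedicated compute units.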

Enabling AI native application development: Qianfan AppBuilder officially opens for service

To support agile, efficient AI native application development and lower its barrier to entry, Baidu Intelligent Cloud Qianfan AppBuilder has officially opened for service.

AppBuilder distills the common patterns, tools, and processes of building AI native applications on large models into a workbench, letting developers focus on their business rather than on development plumbing. Concretely, AppBuilder consists of two layers of services: components and frameworks.

The "component" service is composed of multi-modal AI capability components (such as text recognition, text graph, etc.), large language model-based capability components (such as long text summary, nl2sql, etc.), and basic components (such as vector database, object storage, etc.). It is a component encapsulation of the underlying service capabilities, allowing each component to complete a specific function.

The "framework" is to selectively connect and combine these components so that they can complete the task of a particular scene more completely. At present, the retrieval enhanced generation (RAG), agent (Agent) and intelligent data analysis (GBI) provided by AppBuilder are commonly used native application frameworks of AI.

AppBuilder comes in two product forms: code mode and low-code mode. For users with deep AI native application development needs, the code mode provides development kits and application components, including an SDK, a development environment, debugging tools, and sample application code, while the low-code mode provides visual tools with which users can customize and launch an AI native application in a few clicks.

▲ Baidu Intelligent Cloud Qianfan AppBuilder

Robin Li, Baidu's founder, chairman, and CEO, believes that a thriving ecosystem of AI native applications will drive economic growth. In October this year, Baidu Intelligent Cloud launched China's first full-link ecosystem support system for large models, giving partners all-round support covering enablement training, AI native application incubation, sales opportunities, and marketing, committed to a prosperous AI native application ecosystem.

Hou Zhenyu predicts that 2024 will be the first year of AI native applications, ushering in their explosive growth. Baidu Intelligent Cloud will keep launching competitive products and solutions, working with partners to cultivate customer application scenarios and enabling more AI native application innovation to emerge.
