
From hardware and software to ecosystem, Nvidia is accelerating the AI PC revolution and proving with real strength that RTX is AI.

2024-05-30 Update From: SLTechnology News&Howtos



Can one plug-in triple a graphics card's speed? Unlocking the latest trump card of the AIGC era.

Author | Yunpeng

Editor | Moying

The AIGC scene has lately been treated to a real feast of new product releases!

First, Google's Gemini dropped suddenly late at night, squaring up to OpenAI's GPT-4; then Stanford scientist Fei-Fei Li's team unveiled W.A.L.T, an AI video generation model taking on Pika and Gen-2; and most recently, Microsoft released a small language model that can run on end devices, with average performance said to beat even Llama 2.

▲ A video clip generated by W.A.L.T

Large AI models are booming, nearly every technology giant is going "all in" on AI, and the AI arms race has quickly rolled from the cloud to the edge. From smartphones to PCs, all kinds of familiar smart hardware around us has been swept into this large-model wave.

Intelligent assistants and AIGC-related applications of all kinds are springing up like bamboo shoots after a spring rain, and all manner of "GPTs" have gradually entered public view.

Amid the heat, the end-side deployment of large AI models cannot happen without underlying hardware support. Nvidia, Intel, AMD, and other major manufacturers keep unveiling new hardware and software products for the new AI era, hastening the arrival of the AI PC. As the most capable consumer computing device, the PC has become one of the most ideal platforms for running large AI models on the end side.

There is no doubt that AI will be a key inflection point in the development of the PC industry: it will completely change the experience of gamers, creators, office workers, students, and every ordinary PC user.

▲ Image created with Bing (source: PCWorld)

There are currently more than 100 million Windows PCs and workstations worldwide equipped with Nvidia RTX GPUs. As the core full-stack player of the large-model era, Nvidia is doubling the AI performance of these "RTX PCs" through its full-stack ecosystem.

In the familiar Stable Diffusion application, with Nvidia's RTX-exclusive acceleration plug-in, a single RTX 4090 can generate 100 high-quality images in just 49 seconds, roughly tripling generation speed, and the upgrade requires no change to any other hardware.
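As a quick sanity check on those figures, the implied throughput can be worked out directly. This is only a sketch: the 100-image count, the 49-second time, and the 3x speedup come from the text above; the baseline numbers are derived from them.

```python
# Throughput implied by the quoted figures: 100 images in 49 s with the
# acceleration plug-in, versus the claimed ~3x speedup over the baseline.
accelerated_imgs = 100
accelerated_secs = 49
speedup = 3  # "tripling the speed", per the text

accelerated_rate = accelerated_imgs / accelerated_secs  # images per second
baseline_rate = accelerated_rate / speedup              # implied pre-plugin rate
baseline_secs = accelerated_imgs / baseline_rate        # time for the same 100 images

print(f"accelerated: {accelerated_rate:.2f} img/s")
print(f"baseline:    {baseline_rate:.2f} img/s (~{baseline_secs:.0f} s for 100 images)")
```

In other words, the same batch that once took around two and a half minutes now finishes in under a minute.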

The application of Nvidia's RTX technologies in the AI field makes it easier and more efficient for countless developers worldwide to create AI applications, and the way people use PCs is quietly changing.

How has Nvidia laid the foundation of the AI PC era? What is Nvidia's deepest-hidden trump card of the AIGC era? Today, from hardware to software to ecosystem, Nvidia seems to have become synonymous with AI.

01. From general computing to accelerated computing, from the data center to the PC: Nvidia's CUDA ecosystem takes center stage

What key variables are brewing in today's computing industry, and what role does Nvidia play in it?

As Nvidia CEO Jensen Huang noted on the earnings call, two important changes in today's global computing industry are worth watching. First, traditional general-purpose computing, which uses one general-purpose processor to handle all work, no longer has a cost or efficiency advantage; its opposite, "accelerated computing", will become mainstream instead.

As the name implies, the data center needs to "accelerate" every workload as much as possible in order to be more performant, energy-efficient, and cost-effective.

Second, under the general trend of accelerating computing, new ways of software development have become possible, which has also promoted the transformation of software platforms, making previously impossible applications possible.

As Jensen Huang put it, AI is not a luxury but a necessity, and AI investment is a strategic and urgent need that can help enterprises improve their future competitiveness.

In this AI war that "cannot be lost", Nvidia's GPUs have moved to the center of the stage and the center of the global "new AI industry". In Jensen Huang's view, today's data center is like an "AI factory": data is the raw material that is produced, developed, refined, and transformed into the most valuable thing in the world, intelligence.

Obviously, this is an innovation of technology and even business paradigm for all technology giants.

Against this backdrop, industries are undergoing a platform transition from general-purpose computing to accelerated computing and generative AI, as evidenced by the nearly 280% year-on-year growth of Nvidia's data center business in its latest quarterly results.

AI model startups, consumer internet companies, and global cloud giants are all actively "preparing for war". Major cloud service providers keep increasing their investment in AI cloud services, enterprise software companies have added AI features and functions to their platforms, and many manufacturers have launched customized AI products, driving intelligence and automation across major industries.

Amid this surge, Nvidia's GPUs, CPUs, networking, AI foundry services, and AI enterprise software have become the core "engine" accelerating the transformation.

The intelligence and automation of these industries are closely tied to the computing industry, where, beyond the "accelerated computing" transformation of the data center, the PC industry, as a key part of computing, is undergoing a similar change.

At present, generative AI products of all kinds are rapidly becoming pillar applications on high-performance PCs, playing an important role in the daily work of practitioners across industries. And Nvidia's RTX GPUs have undoubtedly become the core underlying technical support of the AI PC era.

Why have Nvidia's GPUs reached such a critical, almost "irreplaceable" position in the AI era?

In fact, any mention of Nvidia's GPUs has to mention CUDA. In Jensen Huang's view, Nvidia created accelerated computing by inventing CUDA, a new programming model, together with the GPU as its processor.

What CPUs cannot do efficiently, GPUs can accelerate effectively, with significant advantages in performance and energy cost. After nearly 25 years of development, CUDA-based GPUs have become deeply bound to developers, system manufacturers, cloud service providers, technology vendors, and users, and the CUDA ecosystem has earned the trust of industry after industry. This is one of the fundamental reasons Nvidia is irreplaceable.

Accelerated computing, championed by Nvidia, has played a key role in the development of deep learning and large AI models, and the rise of generative AI, which many call "the fourth industrial revolution", is closely tied to it.

In Jensen Huang's view, intelligence is the most valuable thing of all. If intelligence can be mass-produced and automated, the value it brings is incalculable.

What Nvidia is doing now is to push this future into reality.

02. Compatible architecture paves the way, TensorRT-LLM doubles inference performance, and AI image generation enters the "seconds" era

As mentioned above, Nvidia's cultivation of its technical ecosystem, especially 25 years of deep work on CUDA, is the key to its center-stage position in the AI era. The energy this ecosystem releases in the PC industry is becoming ever more prominent.

In the fourth quarter of this year, Nvidia released TensorRT-LLM, an engine-optimizing compiler aimed at further improving the inference performance of large language models.

Training large AI models in the cloud demands enormous computing power, but for large models to truly enter everyone's life, the "last kilometer" problem of inference must be solved, because inference tasks in real-world scenarios are often extremely complex.

According to official data, with TensorRT-LLM, the inference performance of the H200 on large language models such as Llama 2 can reach twice that of the H100, and costs can be cut dramatically. Compared with the H100, the H200's inference performance on the GPT-3 model improves 18-fold. Thanks to this, Nvidia's customers can use larger models without added latency.

▲ TensorRT-LLM v0.6.0 brings up to a 5x improvement in inference performance

TensorRT-LLM's performance is closely tied to the characteristics of CUDA. Nvidia's CFO specifically noted on the earnings call that they were able to create TensorRT-LLM because CUDA is programmable; if CUDA and its GPUs were not programmable, it would be difficult to iterate and improve the software stack at such a fast rate.

After more than 20 years of cultivation, every Nvidia GPU is backed by a software stack that is continuously iterated and updated, and CUDA's flexibility and compatibility are standout advantages of this ecosystem.

Nvidia has a vast ecosystem of software developers, a network of system manufacturers and distribution partners, and it is the compatibility of the CUDA software architecture that truly ties them together into an efficient ecosystem.

Nvidia's CFO said that building everything on compatibility was a great decision made decades ago, and that ensuring architectural compatibility has always been their top priority.

Whenever Nvidia introduces a new feature, function, or technology, developers in the ecosystem immediately benefit from it. Nvidia now has 28,000 employees worldwide serving different regions, industries, markets, and companies, yet they maintain efficient collaboration, and good compatibility has much to do with that.

This compatibility in turn brings stability to the Nvidia platform, which is one of the key reasons nearly every new application worldwide chooses to develop for and optimize on the Nvidia platform first.

There are millions of Nvidia GPUs in cloud data centers and more than 100 million Nvidia GPUs in the hands of PC and workstation users worldwide. Because they are architecturally compatible, every technological innovation built on the Nvidia platform can quickly reach all of these products.

This, too, can be counted among Nvidia's core, hard-to-replace advantages.

Finally, on the accelerated computing front, Nvidia GPUs can accelerate Spark, Python, and even Pandas, the most successful data science framework to date. Pandas is now accelerated by Nvidia CUDA and can be used this way without changing a single line of code.
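The "no code change" claim refers to Nvidia's cudf.pandas accelerator mode in the RAPIDS suite, which intercepts the pandas API and falls back to CPU pandas where GPU support is missing. A minimal sketch, under the assumption that RAPIDS cuDF is installed on a suitable GPU system: the script itself is ordinary pandas, and GPU execution is enabled purely by how it is launched.

```python
import pandas as pd

# Ordinary pandas code: nothing here is GPU-specific. Saved as script.py and
# run unchanged as
#     python -m cudf.pandas script.py
# (or with the cudf.pandas extension loaded in Jupyter), the same calls are
# executed on the GPU where cuDF supports them, per Nvidia's docs.
df = pd.DataFrame({
    "gpu": ["RTX 4090", "RTX 4090", "RTX 4060 Ti", "RTX 4060 Ti"],
    "images": [100, 120, 40, 55],
})

# A typical groupby-aggregate, the kind of operation cuDF accelerates.
summary = df.groupby("gpu")["images"].sum().sort_values(ascending=False)
print(summary)
```

The appeal of this design is exactly what the article describes: the acceleration lives below the API surface, so existing pandas scripts and notebooks need no porting effort.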

Beyond the enterprise and professional fields, the acceleration Nvidia GPUs bring is also clearly perceptible to ordinary users.

In the fourth quarter of this year, alongside TensorRT-LLM, Nvidia also brought TensorRT-LLM for Windows. At the same time, at its Ignite conference, Microsoft released a TensorRT-LLM wrapper for the OpenAI Chat API, RTX-driven DirectML performance improvements for Llama 2, and other new tools and resources.

It can be said that end users of Windows PCs can also enjoy the acceleration dividend that TensorRT-LLM brings.

According to official data, TensorRT-LLM for Windows can improve the inference performance of large language models on end devices by up to four times. With an installed base of more than 100 million Nvidia RTX GPUs, this new capability can spread rapidly and widely, which is good news for application developers.

The launch of TensorRT-LLM for Windows undoubtedly means large AI models can be better deployed on end-side RTX PCs, meeting users' various AIGC needs and enhancing their AI PC experience.

Hundreds of AI-powered developer projects and applications can run locally on PCs equipped with RTX GPUs, and users' private and proprietary data can stay local on the PC as well.

It is worth mentioning that TensorRT-LLM is continuously updated to support newly popular models such as Mistral 7B and Nemotron-3 8B; these releases run directly on GeForce RTX 30 and 40 series GPUs with 8GB or more of video memory.

▲ Configuration requirements of the TensorRT extension (source: Bilibili uploader Nenly)

According to tests by Nenly, a professional designer on the Bilibili platform, with TensorRT the image generation speed of Stable Diffusion, a popular text-to-image application running on RTX GPUs, increases by 2x or even more than 3x, taking AI image generation into the "seconds" era.

▲ Images generated per minute: standard Stable Diffusion versus the TensorRT-optimized engine (source: Bilibili uploader Nenly)

On a GeForce RTX 4090, Stable Diffusion runs seven times faster than on a top-spec Mac with Apple's M2 Ultra. Even a 4060 Ti, with the TensorRT extension, generates images faster than an unaccelerated 4090.

For creative workers who need to produce sketches in volume, the benefit of this acceleration is obvious: facing jobs that involve thousands of images, the time saved may be measured in days.

Take Zhao Enzhe, known as "the Liu Cixin of illustration" and "the leading figure of domestic science fiction painting", and the first Chinese artist to win the Hugo Award. In his work, Zhao Enzhe uses a Stable Diffusion workflow fully accelerated by GeForce RTX GPUs. AI-assisted creation tools save him time on refinement and open up many creative possibilities he would not otherwise attempt.

▲ "Boat of Nothingness", created with Stable Diffusion and accelerated end to end by GeForce RTX GPUs

Zhao Enzhe said that every creator longs to present the world in their mind perfectly, but limited by technical thresholds and the industrial process, in the past they could only compromise between concept design and final presentation. Today, with the computing power of Nvidia GeForce RTX graphics cards and AI creative tools such as SD and Runway, concept designers can break through those limits and try every idea in just seconds. "I believe that with even greater computing power in the future, every artist will be able to create without limits!"

▲ Zhao Enzhe

Of course, building on these RTX GPU capabilities, companies can also construct the most efficient acceleration engines for their own models, maximizing the return on computing power and achieving significant cost reduction and efficiency gains.

It can be said that from enterprises to individuals, from data centers to PCs, on the foundation of a solid ecosystem decades in the making, Nvidia's technological innovation is bringing everyone closer to large AI models.

03. DLSS rewrites the game industry with AI, and Nvidia arms developers to the teeth: RTX is AI

Today we have seen the disruptive upgrade TensorRT brings to running AI models on the end-side PC. Beyond that, Nvidia RTX GPUs have been deep in the AI field for years, and mentioning AI is now almost tantamount to mentioning Nvidia. In the words of Nvidia's CFO, "RTX is AI."

In gaming, this is particularly obvious. The most representative example is DLSS, one of the earliest AI models Nvidia launched. Since its debut five years ago, DLSS has gone through several major version iterations, integrating AI ever more deeply; it now comprises three AI models: Super Resolution, Frame Generation, and Ray Reconstruction.

From its first release, DLSS has depended on the continuous learning of the AI models behind it. The results of that learning feed back into iteration, driving improvements in the performance and image quality DLSS delivers in games.

This year Nvidia launched DLSS 3.5, a clear boost to game graphics rendering technology. Based on the new Ray Reconstruction AI model, DLSS 3.5 can create higher-quality ray-traced images while further improving frame rates, killing two birds with one stone.

The technology has been widely praised by players. According to official figures, more than 500 games and applications now support RTX features, and the growth of this ecosystem has brought a real, tangible improvement to the player experience.

Beyond games, in productivity and creative work, official data shows Nvidia's RTX GPUs have accelerated more than 110 creative applications, with especially broad use in generative AI applications.

Today the door to the AI PC era has been pushed open, and manufacturers are actively finding their positioning and laying out products and technologies. Developers play a critical role in this era: with RTX, they can now deploy cutting-edge AI models in their applications through cross-vendor APIs.

What Nvidia has been doing is strengthening developers' capabilities, arming them to the teeth for the new AI era. The new optimizations, models, and resources Nvidia provides will accelerate the development and deployment of AI features and applications on more than 100 million RTX PCs worldwide, making the integration of AI and the PC ever easier.

04. Conclusion: to play AI in the AIGC era, there is no way around Nvidia

In the AIGC era, countless startups are pouring onto the track, and consumers are scrambling to try new technologies and feel the experience innovations generative AI brings. Nvidia has undoubtedly become the star enterprise at the forefront: from hardware and software to ecosystem, Nvidia holds one trump card after another.

The development of AI has driven a paradigm shift across the entire computing industry, accelerated the popularization of accelerated computing, and affected enterprises in every industry around the world. Nvidia's technology is going everywhere, from the data center to the PC in each of our homes, playing a key role in bringing AI technology to the ground.

Looking back at the history of the technology industry, it is clear that the most successful companies win through ecosystems, from the ecosystem around their own products and technologies to the solid ecosystem formed with players across the industrial chain. Ecosystems are bound to be the core focus of today's technology giants.

In any case, in today's AIGC era, anyone who wants to ride the wind and stand at the crest of the wave must board the Nvidia ship.

This article comes from the WeChat official account (ID: aichip001); author: Yunpeng.



© 2024 SLNews company. All rights reserved.