Shulou (Shulou.com) 12/24 Report --
According to statistical analysis by market tracking firm Omdia, Nvidia sold about 500,000 H100 and A100 GPUs in the third quarter. Behind the boom in large language models is a scramble among organizations for GPUs, and behind that sit Nvidia's nearly thousand-ton quarterly graphics card shipments.
Previously, Omdia had estimated from second-quarter sales that Nvidia shipped roughly 900 t of GPUs in that quarter.
Behind the popularity of large language models, Nvidia has built a powerful graphics card empire. Amid the tide of artificial intelligence, GPUs have become the object of competition among institutions, companies and even countries worldwide.
In the third quarter of this fiscal year, Nvidia generated $14.5 billion in revenue from data center hardware, almost quadruple the figure from the same period last year.
This is largely thanks to the H100 GPU, demand for which has exploded with the growth of artificial intelligence and high-performance computing (HPC).
According to market tracker Omdia, Nvidia sold nearly 500,000 A100 and H100 GPUs, and demand is so high that lead times for H100-based servers run 36 to 52 weeks.
Omdia's figures show that Meta and Microsoft are the biggest buyers. Each purchased as many as 150,000 H100 GPUs, far more than Google, Amazon, Oracle and Tencent (50,000 each).
It is worth noting that most of these server GPUs go to hyperscale cloud service providers. Server original equipment manufacturers (such as Dell, Lenovo and HPE) currently cannot obtain enough AI and HPC GPUs.
Omdia expects Nvidia's H100 and A100 sales to again exceed 500,000 units in the fourth quarter of 2023.
However, almost all of the companies buying Nvidia H100 GPUs in large quantities are also developing their own custom chips for AI, HPC and video workloads.
As a result, as they switch to their own chips, their purchases of Nvidia hardware may gradually decline.
Omdia's data also shows that server shipments fell 17 to 20 percent year-on-year in 2023, while server revenue rose 6 to 8 percent year-on-year.
Vlad Galabov, director of cloud and data center research practice at Omdia, and Manoj Sukumaran, chief analyst for data center computing and networking, predict that the server market will be worth $195.6 billion by 2027, more than double what it was a decade ago.
As large companies turn to hyper-heterogeneous computing, optimizing server configurations with multiple coprocessors, demand for server processors and coprocessors will continue to grow.
Among servers running AI training and inference today, the most popular for large language model training are Nvidia DGX servers with eight H100/A100 GPUs, and for AI inference, Amazon servers with 16 custom Inferentia 2 coprocessors.
Among video transcoding servers packed with custom coprocessors, the most popular are Google's, with 20 VCUs (video coding units), and Meta's video processing servers with 12 scalable video processors.
As the requirements of some applications mature, building optimized custom processors becomes more and more cost-effective.
Media and AI will be the early beneficiaries of hyper-heterogeneous computing, followed by similar optimizations for other workloads such as databases and web services.
The Omdia report points out that the rise in highly configured AI servers is driving investment in data center physical infrastructure.
For example, rack power distribution revenue in the first half of this year was up 17% from a year earlier, and data cabinet thermal management revenue is expected to grow 17% in 2023 as demand shifts toward liquid cooling solutions.
In addition, as generative AI services take off and enterprises adopt AI widely, the bottleneck on the pace of AI deployment may turn out to be power supply.
Beyond the giants mentioned above, other enthusiastic buyers, "private" organizations and companies, have purchased Nvidia H100s to develop their own businesses or to invest in the future.
Bit Digital, a sustainable digital infrastructure platform headquartered in New York that provides digital asset and cloud computing services, has signed term sheets with customers for its Bit Digital AI business to support GPU-accelerated workloads.
Under the agreement, Bit Digital will rent customers a minimum of 1,024 and a maximum of 4,096 GPUs.
Meanwhile, Bit Digital has agreed to purchase 1,056 Nvidia H100 GPUs and has paid the initial deposit.
The BlueSea Frontier Compute Cluster (BSFCC), created by the American company Del Complex, is essentially a huge barge holding 10,000 Nvidia H100 GPUs worth a total of $500 million.
A non-profit group called Voltage Park bought 24,000 Nvidia H100 chips for $500 million, Reuters reported.
Voltage Park, an AI cloud computing organization funded by billionaire Jed McCaleb, plans to lease its computing power to artificial intelligence projects.
GPUs from Voltage Park cost as little as $1.89 per GPU per hour. On-demand customers can rent 1 to 8 GPUs, while users who want more must commit to a minimum lease term.
By contrast, Amazon offers on-demand access through P5 nodes of eight H100s, but at a much higher price: for an 8-GPU node, AWS charges $98.32 per hour to Voltage Park's $15.12.
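As a quick sanity check of those list prices, here is a minimal sketch of the per-GPU-hour arithmetic (the node rates are the figures quoted above; the per-GPU breakdown is simple division, not either vendor's published per-GPU pricing):

```python
# Compare the quoted 8-GPU node rates on a per-GPU-hour basis.
VOLTAGE_PARK_NODE_RATE = 15.12  # USD per hour for an 8-GPU node
AWS_P5_NODE_RATE = 98.32        # USD per hour for a P5 node with 8 H100s
GPUS_PER_NODE = 8

vp_per_gpu = VOLTAGE_PARK_NODE_RATE / GPUS_PER_NODE   # 1.89 USD per GPU-hour
aws_per_gpu = AWS_P5_NODE_RATE / GPUS_PER_NODE        # ~12.29 USD per GPU-hour

print(f"Voltage Park: ${vp_per_gpu:.2f} per GPU-hour")
print(f"AWS P5:       ${aws_per_gpu:.2f} per GPU-hour")
print(f"AWS costs about {aws_per_gpu / vp_per_gpu:.1f}x more per GPU-hour")
```

At these rates, AWS's on-demand price works out to roughly 6.5 times Voltage Park's.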
Amid the artificial intelligence craze, Nvidia itself is ambitious.
The Silicon Valley chip giant hopes to increase production of H100 processors and aims to ship 1.5 million to 2 million units next year, the Financial Times reported.
Thanks to the popularity of large language models such as ChatGPT, Nvidia's market capitalization soared in May, and the company joined the trillion-dollar club.
As the basic building block for developing large language models, GPUs have become the object of competition among artificial intelligence companies and even countries around the world.
Saudi Arabia and the United Arab Emirates have purchased thousands of Nvidia H100 processors, according to the Financial Times.
Meanwhile, deep-pocketed venture capital firms are busy buying GPUs for the startups in their portfolios so they can build their own artificial intelligence models.
Nat Friedman, former CEO of GitHub, and Daniel Gross, backers of GitHub, Uber and many other successful startups, have bought thousands of GPUs and built their own artificial intelligence cloud service.
The system, called the Andromeda Cluster, has 2,512 H100 GPUs and can train a 65-billion-parameter AI model in about 10 days. That is not the largest model around, but it is still considerable.
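That 10-day figure roughly checks out against the common 6ND FLOPs rule of thumb. The sketch below assumes a LLaMA-65B-scale token budget (~1.4 trillion tokens) and a sustained per-GPU throughput of ~300 TFLOPS; both are illustrative assumptions, not numbers from the report:

```python
# Estimate training time with the ~6 * params * tokens FLOPs rule of thumb.
PARAMS = 65e9            # 65-billion-parameter model
TOKENS = 1.4e12          # assumed token budget, roughly LLaMA-65B scale
NUM_GPUS = 2512          # Andromeda Cluster size
SUSTAINED_FLOPS = 3e14   # assumed ~300 TFLOPS sustained per H100 (~30% utilization)

total_flops = 6 * PARAMS * TOKENS            # ~5.5e23 FLOPs for the full run
cluster_flops = NUM_GPUS * SUSTAINED_FLOPS   # ~7.5e17 FLOP/s for the cluster
days = total_flops / cluster_flops / 86_400  # seconds per day

print(f"Estimated training time: {days:.1f} days")  # ~8.4 days
```

Under those assumptions the cluster would finish in roughly 8 to 9 days, in the same ballpark as the quoted 10.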
Although only startups backed by the two investors can use these resources, the move has been well received.
Jack Clark, co-founder of Anthropic, said that individual investors have done more to support compute-intensive startups than most governments.
Nvidia sold $10.3 billion worth of data center hardware in the second quarter, compared with $14.5 billion in the third quarter.
Based on that figure, Omdia estimated that an Nvidia H100 with its heatsink weighs more than 3 kg (6.6 lb) on average, so the more than 300,000 H100s Nvidia shipped in the second quarter weighed over 900 t (1.8 million lb) in total.
To make those 900 tons a little more concrete, that is roughly equivalent to:
4.5 Boeing 747s
11 Space Shuttle orbiters
215,827 gallons of water
299 Ford F-150s
181,818 PlayStation 5s
32,727 golden retrievers
However, some outlets consider this estimate imprecise, because the Nvidia H100 comes in three different form factors with different weights.
The Nvidia H100 PCIe graphics card weighs 1.2 kg, while an OAM module with its heatsink weighs at most 2 kg.
Assuming 80 percent of Nvidia H100 shipments are modules and 20 percent are graphics cards, the average weight of a single H100 works out to about 1.84 kg.
Either way, the numbers are astonishing: with Nvidia's sales up significantly in the third quarter, 500,000 GPUs at roughly 2 kg each would put shipments at about 1,000 tons.
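The arithmetic behind the competing weight estimates is straightforward; a minimal sketch, using only the unit counts and per-unit weights quoted above:

```python
# Omdia's Q2 estimate: ~300,000 H100s at >3 kg each (GPU plus heatsink).
q2_units, q2_avg_kg = 300_000, 3.0
print(f"Q2 estimate: {q2_units * q2_avg_kg / 1000:.0f} t")  # ~900 t

# Alternative mix: 80% OAM modules (2 kg) and 20% PCIe cards (1.2 kg).
avg_kg = 0.8 * 2.0 + 0.2 * 1.2
print(f"Blended average weight: {avg_kg:.2f} kg")           # 1.84 kg

# Q3: ~500,000 GPUs at roughly 2 kg each.
q3_units = 500_000
print(f"Q3 estimate: {q3_units * 2.0 / 1000:.0f} t")        # ~1,000 t
```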
Graphics cards are now shipping by the ton. What do you think?
Reference:
https://www.tomshardware.com/tech-industry/nvidia-ai-and-hpc-gpu-sales-reportedly-approached-half-a-million-units-in-q3-thanks-to-meta-facebook