Nvidia Launches the Hopper H100, Its Fastest AI and Compute Accelerator
Nvidia is best known as the largest manufacturer of discrete graphics processors and consumer graphics cards. Less widely known is that most of the company's revenue comes from using that capacity to produce graphics chips for parallel compute accelerators. Now the company has presented its latest professional solution for data centers.
At GTC, Nvidia announced its new graphics processor for professional use in data centers: the H100. As earlier reports suggested, the processor is based on the Hopper architecture, which Nvidia positions as central to the further development of artificial intelligence.
The new Nvidia H100 is made on TSMC's 4N process and consists of some 80 billion transistors. For comparison, the RTX 3090 graphics card has 28.3 billion transistors, while the GA100 graphics processor has 54 billion.
This promises a huge jump in performance: according to Nvidia, the H100 should deliver up to thirty times the performance of the previous generation of such graphics processors, writes Hot Hardware.
Switching to HBM3 memory also raises the data transfer rate from 2.43 Gbps to 4.8 Gbps per pin. Notably, the memory configuration itself is unchanged: five active memory stacks for a total of 80 GB per GPU, with total bandwidth now reaching 3 TB/s.
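The quoted figures can be sanity-checked with simple arithmetic. This sketch assumes the five active HBM stacks expose a combined 5120-bit interface (1024 bits per stack), which is not stated in the article itself:

```python
# Sanity check of the quoted memory figures.
# Assumption (not from the article): five HBM stacks at 1024 bits each,
# for a 5120-bit combined memory interface.
pins = 5 * 1024            # total memory interface width in bits

old_rate_gbps = 2.43       # per-pin data rate of the previous generation
new_rate_gbps = 4.8        # per-pin data rate quoted for HBM3

# bits per second across all pins, divided by 8 to get bytes per second
old_bw_gbs = pins * old_rate_gbps / 8
new_bw_gbs = pins * new_rate_gbps / 8

print(f"old: {old_bw_gbs:.1f} GB/s")  # 1555.2 GB/s (~1.55 TB/s)
print(f"new: {new_bw_gbs:.1f} GB/s")  # 3072.0 GB/s (~3 TB/s)
```

Under that assumption, the 4.8 Gbps per-pin rate lands almost exactly on the quoted 3 TB/s aggregate figure.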
Nvidia also pointed out that the new Hopper-based graphics processor brings a new set of instructions called "DPX". These instructions focus on accelerating dynamic programming, and Nvidia claims they enable speedups of up to 40x compared with dual 32-core Ice Lake Xeon processors.
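To illustrate the class of workload in question, here is a classic dynamic-programming kernel, Levenshtein edit distance, in plain Python. This is purely illustrative of the structure of such algorithms (nested loops over a table of min/plus updates); it does not use the DPX instructions themselves, whose exact programming interface is not described in the article:

```python
# Levenshtein edit distance: a textbook dynamic-programming algorithm.
# Its inner loop is a min-plus recurrence over a table of subproblem
# results, the kind of pattern DPX is said to accelerate in hardware.
def edit_distance(a: str, b: str) -> int:
    # prev[j] holds the cost of transforming a[:i-1] into b[:j]
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

print(edit_distance("kitten", "sitting"))  # 3
```

Genomics alignment (e.g. Smith-Waterman) and route optimization, which Nvidia cites as target domains for DPX, follow the same table-filling pattern.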
Another important piece of information from Nvidia is that the H100 is compatible with "all types of data centers": it will be delivered in a range of variants, from specialized PCIe cards to pre-configured supercomputers with 256 H100 units.
The base model will be the SXM version of the H100, while as a PCIe card it will ship as the H100 CNX "Converged Accelerator". Essentially, this card pairs an H100 chip with a dual-port Nvidia ConnectX-7 InfiniBand adapter. The GPU connects to the NIC via PCIe 5.0, while the NIC connects to the host via PCIe 4.0.
Finally, there is the new DGX system, the DGX H100. This is the fourth generation of Nvidia's supercomputer module and is very similar to the previous-generation DGX A100: in essence, the eight A100 GPUs have been replaced with eight SXM H100 accelerators, for a total of 32 petaFLOPS.
It is worth adding that these products will be difficult to obtain and are most likely intended for government organizations and research projects. Accordingly, Nvidia has not announced prices for the H100 processors, but they are expected to be available in the third quarter of this year.