NVIDIA Hopper H100 accelerator card with 120GB of HBM2e memory spotted

NVIDIA previously launched the Hopper H100 GPU in two versions, SXM5 and PCIe. Both have the same 80 GB memory capacity, but the former uses the new HBM3 standard while the latter uses HBM2e.

Now, according to s-ss.cc, NVIDIA may be working on a brand-new PCIe version of the Hopper H100 GPU. On top of that, the new graphics card may come not with 80GB but with 120GB of HBM2e memory.

As the picture below shows, the leaker also obtained an ADLCE engineering sample card. We have no further news about that card, but an H100 GPU with 120GB of video memory is already something to look forward to.

The new card should use the same GH100 configuration as before, with 16896 CUDA cores and memory bandwidth reaching 3 TB/s, matching the core specification and performance of the SXM version of the H100.
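As a rough plausibility check (the figures below are assumptions, not from the leak): a fully populated HBM2e interface at the top common speed grade lands close to the quoted bandwidth.

```python
# Back-of-the-envelope HBM2e bandwidth estimate. The stack count and
# per-pin rate are assumptions (six active 1024-bit stacks at ~3.6 Gbps,
# the fastest common HBM2e grade), not figures from the leak.
stacks = 6
bus_width_bits = stacks * 1024           # 6144-bit total bus
pin_rate_gbps = 3.6                      # assumed per-pin data rate

bandwidth_gbs = bus_width_bits * pin_rate_gbps / 8
print(f"{bandwidth_gbs:.0f} GB/s")       # ~2765 GB/s, in the ballpark of 3 TB/s
```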

The leaker points out that the single-precision performance of this H100 120GB PCIe version is the same as the SXM version, at roughly 60 TFLOPS.
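That ~60 TFLOPS figure also falls out of the standard FP32 throughput formula. A minimal sketch, where the boost clock is an assumption chosen to match the quoted number (the leak does not state one):

```python
# Peak FP32 = CUDA cores x 2 FLOPs/cycle (one FMA) x clock speed.
cuda_cores = 16896
fma_flops_per_cycle = 2
boost_clock_ghz = 1.78                   # assumed, not stated in the leak

tflops = cuda_cores * fma_flops_per_cycle * boost_clock_ghz / 1000
print(f"{tflops:.1f} TFLOPS")            # ~60.1 TFLOPS
```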

The full specifications of the GH100 GPU are as follows (a quick check of the totals appears after the list):

  • 8 GPCs, 72 TPCs (9 TPCs/GPC), 2 SMs/TPC, 144 SMs per full GPU
  • 128 FP32 CUDA cores per SM, 18432 FP32 CUDA cores per full GPU
  • 4 Gen 4 Tensor Cores per SM, 576 per full GPU
  • 6 HBM3 or HBM2e stacks, 12 512-bit memory controllers
  • 60 MB L2 cache
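The per-GPU totals above follow directly from the per-unit counts; a quick arithmetic check:

```python
# Derive the full-GPU totals from the per-unit counts listed above.
gpcs = 8
tpcs_per_gpc = 9
sms_per_tpc = 2
fp32_cores_per_sm = 128
tensor_cores_per_sm = 4

tpcs = gpcs * tpcs_per_gpc               # 72 TPCs
sms = tpcs * sms_per_tpc                 # 144 SMs
fp32_cores = sms * fp32_cores_per_sm     # 18432 FP32 CUDA cores
tensor_cores = sms * tensor_cores_per_sm # 576 Tensor Cores
bus_width_bits = 12 * 512                # 6144-bit combined memory bus

print(tpcs, sms, fp32_cores, tensor_cores, bus_width_bits)
```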

In addition, the ADLCE engineering sample card should be an ES (engineering sample) of the RTX 4090, but with its TDP limited to 350W, its single-precision performance is only slightly above 60 TFLOPS.

Released in April 2022, the H100 packs 80 billion transistors and features many groundbreaking technologies, including a powerful new Transformer Engine and NVIDIA NVLink interconnect technology, to accelerate the largest AI models, such as advanced recommender systems and large language models, and to drive innovation in areas such as conversational AI and drug discovery.

According to NVIDIA, the H100 lets enterprises cut the cost of deploying AI: it delivers the same AI performance with a 3.5x improvement in energy efficiency compared with the previous generation, reducing total cost of ownership to one third and the number of server nodes required to one fifth.

NVIDIA DGX H100 systems are also now accepting customer preorders. The system includes 8 H100 GPUs with a peak performance of 32 PFLOPS at FP8 precision. Each DGX system includes NVIDIA Base Command and NVIDIA AI Enterprise software, enabling cluster deployments from a single node to NVIDIA DGX SuperPODs, supporting advanced AI development for large language models and other large-scale workloads.
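The 32 PFLOPS system figure is simply eight GPUs at NVIDIA's published per-GPU FP8 Tensor Core rate (with sparsity):

```python
# DGX H100 peak FP8 = 8 GPUs x per-GPU FP8 Tensor Core throughput.
# ~3958 TFLOPS is NVIDIA's published H100 SXM FP8 rate with sparsity.
gpus = 8
fp8_tflops_per_gpu = 3958

system_pflops = gpus * fp8_tflops_per_gpu / 1000
print(f"{system_pflops:.0f} PFLOPS")     # ~32 PFLOPS
```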

H100-powered systems from the world's leading computer manufacturers are expected to ship in the next few weeks, with more than 50 server models arriving by the end of this year and dozens more in the first half of 2023. Partners already building systems include Atos, Cisco, Dell Technologies, Fujitsu, Gigabyte, HPE, Lenovo and Supermicro.
