NVIDIA Introduces NVIDIA H200 Tensor Core GPU with HBM3e Memory

NVIDIA (NASDAQ: NVDA) today announced it has supercharged the world's leading AI computing platform with the introduction of the NVIDIA HGX™ H200. Based on NVIDIA Hopper™ architecture, the platform features the NVIDIA H200 Tensor Core GPU with advanced memory to handle massive amounts of data for generative AI and high performance computing workloads.

Key Highlights
  • The NVIDIA H200 is the first GPU to offer HBM3e: faster, larger memory to fuel the acceleration of generative AI and large language models while advancing scientific computing for HPC workloads.
  • With HBM3e, the NVIDIA H200 delivers 141GB of memory at 4.8 terabytes per second, nearly double the capacity and 2.4x the bandwidth of the NVIDIA A100 (a quick arithmetic check follows this list).
  • H200-powered systems from the world's leading server manufacturers and cloud service providers are expected to begin shipping in the second quarter of 2024.
  • The NVIDIA H200 will be available in NVIDIA HGX H200 server boards with four- and eight-way configurations, which are compatible with both the hardware and software of HGX H100 systems.
  • Amazon Web Services, Google Cloud, Microsoft Azure and Oracle Cloud Infrastructure will be among the first cloud service providers to deploy H200-based instances starting in 2024.
  • The eight-way HGX H200 provides over 32 petaflops of FP8 deep learning compute and 1.1TB of aggregate high-bandwidth memory for the highest performance in generative AI and HPC applications; a sketch deriving these aggregates also follows this list.
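
As a quick sanity check on the memory comparison above, the short Python sketch below recomputes the capacity and bandwidth ratios. The H200 figures come from this announcement; the A100 figures (80GB of HBM2e at roughly 2.0TB/s, the A100 80GB SXM specification) are assumptions drawn from public spec sheets, not from this release.

```python
# Sanity check of the H200 vs. A100 memory figures quoted above.
# H200 numbers come from this announcement; the A100 80GB numbers
# (80 GB of HBM2e at roughly 2.0 TB/s) are assumed from public specs.

h200_capacity_gb = 141       # HBM3e capacity per H200
h200_bandwidth_tb_s = 4.8    # memory bandwidth per H200

a100_capacity_gb = 80        # assumed: A100 80GB
a100_bandwidth_tb_s = 2.0    # assumed: ~2,039 GB/s, rounded

capacity_ratio = h200_capacity_gb / a100_capacity_gb         # ~1.76x
bandwidth_ratio = h200_bandwidth_tb_s / a100_bandwidth_tb_s  # 2.4x

print(f"Capacity:  {capacity_ratio:.2f}x (nearly double)")
print(f"Bandwidth: {bandwidth_ratio:.1f}x")
```

Run as-is, this prints a 1.76x capacity ratio and a 2.4x bandwidth ratio, matching the "nearly double" and "2.4x" claims.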
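
Similarly, the eight-way aggregates can be derived from the per-GPU numbers. The 141GB per-GPU capacity is quoted above; the per-GPU FP8 rate of roughly 4 petaflops (with sparsity) is an assumption inferred from the quoted 32-petaflop system total rather than a figure stated in this announcement.

```python
# Deriving the eight-way HGX H200 aggregates from per-GPU figures.
# The 141 GB capacity is quoted above; the per-GPU FP8 rate of roughly
# 4 PFLOPS (with sparsity) is an assumption inferred from the quoted
# 32-petaflop system total.

num_gpus = 8
memory_per_gpu_gb = 141
fp8_per_gpu_pflops = 4.0     # assumed

aggregate_memory_tb = num_gpus * memory_per_gpu_gb / 1000  # 1.128 TB
aggregate_fp8_pflops = num_gpus * fp8_per_gpu_pflops       # 32 PFLOPS

print(f"Aggregate HBM3e: {aggregate_memory_tb:.2f} TB")    # ~1.1 TB
print(f"Aggregate FP8:   {aggregate_fp8_pflops:.0f} PFLOPS")
```

Eight GPUs at 141GB each yields 1,128GB, which rounds to the 1.1TB of aggregate high-bandwidth memory quoted above.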
