NVIDIA Unveils GB200 NVL4 with Quad Blackwell GPUs & Dual Grace CPUs; H200 NVL Now Generally Available
NVIDIA has announced two solutions at SC24 aimed at high-performance computing (HPC) and AI workloads: the Blackwell-based GB200 NVL4 and the now generally available Hopper-based H200 NVL.
H200 NVL: Flexible AI & HPC Workhorse
Key Features:
PCIe-Based Design: NVLink bridges connect two or four GPUs into a single NVLink domain, which NVIDIA says delivers up to 7x the bandwidth of PCIe Gen5.
HPC & AI Capabilities (NVIDIA's figures, versus the H100 NVL):
1.5x more HBM memory.
1.7x faster LLM inference.
1.3x higher HPC performance.
Specs:
114 SMs (14,592 CUDA cores).
456 Tensor Cores.
141 GB of HBM3e memory.
TDP: configurable up to 600W.
Applications: Its air-cooled PCIe form factor lets it slot into existing data center racks running mixed AI/HPC workloads.
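To put the per-GPU figures in context, here is a minimal back-of-envelope sketch of the aggregate memory and bandwidth of a four-GPU H200 NVL NVLink domain. The per-GPU numbers are NVIDIA's published H200 specs; summing them assumes a workload that shards evenly across the domain, which is an idealization.

```python
# Back-of-envelope aggregates for a four-GPU H200 NVL NVLink domain.
# Per-GPU figures are NVIDIA's published H200 specs; summing them
# assumes the workload can shard evenly across all four GPUs.
HBM_PER_GPU_GB = 141        # HBM3e capacity per H200
BW_PER_GPU_TBS = 4.8        # HBM3e bandwidth per H200, in TB/s
GPUS_PER_DOMAIN = 4

total_hbm_gb = HBM_PER_GPU_GB * GPUS_PER_DOMAIN
total_bw_tbs = BW_PER_GPU_TBS * GPUS_PER_DOMAIN
print(f"{total_hbm_gb} GB HBM3e, {total_bw_tbs:.1f} TB/s aggregate")
```

In practice, effective pooled bandwidth depends on how much traffic crosses the NVLink bridges rather than staying in local HBM.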
GB200 NVL4: Blackwell’s Next Evolution
Specifications:
Features 2 Grace CPUs and 4 Blackwell GPUs on a single server module.
Memory: 1.3 TB of coherent memory.
Performance (NVIDIA's figures, versus the Grace Hopper-based GH200 NVL4):
2.2x faster simulation.
1.8x faster training and inference.
Power: Estimated TDP of 6 kW for the full module.
Advantages:
Doubles the CPU and GPU count of the original GB200 Superchip (one Grace CPU paired with two Blackwell GPUs).
Increased memory bandwidth for accelerated AI and HPC applications.
Ideal for enterprises needing cutting-edge performance for simulation, AI training, and inference tasks.
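Given the roughly 6 kW module TDP above, deployments are power-bound before they are space-bound. Here is a hedged capacity-planning sketch; the 48 kW rack budget is an assumed example for illustration, not an NVIDIA specification.

```python
# Capacity-planning sketch for GB200 NVL4 deployments.
# MODULE_TDP_KW is the article's estimated module TDP; the rack power
# budget is a hypothetical example value, not a vendor figure.
MODULE_TDP_KW = 6.0
RACK_BUDGET_KW = 48.0       # assumed example rack power budget

modules = int(RACK_BUDGET_KW // MODULE_TDP_KW)
gpus = modules * 4          # four Blackwell GPUs per NVL4 module
cpus = modules * 2          # two Grace CPUs per NVL4 module
print(f"{modules} modules -> {gpus} GPUs / {cpus} CPUs in {RACK_BUDGET_KW:.0f} kW")
```

Real budgets must also cover cooling overhead, networking, and headroom above TDP, so usable density is lower than this simple division suggests.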
NVIDIA’s Future Vision
NVIDIA’s advancements continue to dominate the AI hardware sector:
Recent MLPerf v4.1 records in both training and inference highlight the strength of the Hopper and Blackwell architectures.
Accelerating its AI roadmap to an annual cadence, with upcoming architectures such as Blackwell Ultra and Rubin.