Leveraging the power of third-generation Tensor Cores, HGX A100 delivers up to a 20X speedup for AI out of the box with Tensor Float 32 (TF32) precision and a 2.5X speedup for HPC with FP64. NVIDIA HGX A100 4-GPU delivers nearly 80 teraFLOPS of FP64 for the most demanding HPC workloads. NVIDIA HGX A100 8-GPU provides 5 petaFLOPS of FP16 deep learning compute, while the 16-GPU HGX A100 delivers a staggering 10 petaFLOPS, creating the world’s most powerful accelerated scale-up server platform for AI and HPC.
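The aggregate figures above follow from simple per-GPU arithmetic. As a rough sketch, the snippet below multiplies published A100 per-GPU peak rates (FP64 Tensor Core at 19.5 teraFLOPS; FP16 Tensor Core at 624 teraFLOPS, which assumes structured sparsity) by the GPU count; actual achieved throughput depends on the workload.

```python
# Back-of-the-envelope aggregate throughput for HGX A100 configurations.
# Per-GPU peaks are A100 datasheet figures; the FP16 rate assumes
# structured sparsity. These are nominal peaks, not measured performance.

PER_GPU_TFLOPS = {
    "fp64_tensor": 19.5,   # FP64 Tensor Core
    "fp16_sparse": 624.0,  # FP16 Tensor Core with sparsity
}

def aggregate_tflops(num_gpus: int, precision: str) -> float:
    """Peak aggregate throughput in teraFLOPS for a scale-up node."""
    return num_gpus * PER_GPU_TFLOPS[precision]

print(aggregate_tflops(4, "fp64_tensor"))          # 4-GPU FP64: 78.0 TFLOPS, "nearly 80"
print(aggregate_tflops(8, "fp16_sparse") / 1000)   # 8-GPU FP16: 4.992 petaFLOPS, ~5
print(aggregate_tflops(16, "fp16_sparse") / 1000)  # 16-GPU FP16: 9.984 petaFLOPS, ~10
```

Doubling the GPU count doubles the nominal peak, which is why the 16-GPU configuration lands at roughly twice the 8-GPU figure.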