
NVIDIA H100 NVL NVH100NVLTCGPU-KIT Tensor Core GPU
The NVIDIA H100 NVL Tensor Core GPU is built for AI, deep learning, and high-performance computing at scale. Designed for data centers and enterprise environments, it delivers up to 5X faster performance on Llama 2 70B than NVIDIA A100 systems, with improved power efficiency and reduced latency. With 1.5X higher throughput than the H100 PCIe, the H100 NVL offers top-tier performance on large AI models and intensive training tasks. It combines NVLink connectivity, enhanced memory bandwidth, and high compute density, making it the most capable GPU in the H100 series for large-scale inference and training workflows.

Specifications:
FP64 - 30 teraFLOPS
FP64 Tensor Core - 60 teraFLOPS
FP32 - 60 teraFLOPS
TF32 Tensor Core - 835 teraFLOPS
BFLOAT16 Tensor Core - 1,671 teraFLOPS
FP16 Tensor Core - 1,671 teraFLOPS
FP8 Tensor Core - 3,341 teraFLOPS
INT8 Tensor Core - 3,341 TOPS
GPU Memory - 94GB
GPU Memory Bandwidth - 3.9TB/s
Decoders - 7 NVDEC / 7 JPEG
Max Thermal Design Power (TDP) - 350-400W (configurable)
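As a rough illustration of how the compute and memory figures above relate, the sketch below computes the roofline "ridge point" (the arithmetic intensity, in FLOPs per byte, at which a kernel shifts from memory-bound to compute-bound). This is a back-of-the-envelope estimate using the quoted peak FP16 Tensor Core throughput and memory bandwidth, not an official NVIDIA tool; real kernels rarely reach these peaks.

```python
# Roofline ridge-point estimate from the spec-sheet peaks above.
# Note: NVIDIA's Tensor Core peak figures typically assume structured
# sparsity; dense workloads see roughly half these rates.

PEAK_FP16_TFLOPS = 1671      # FP16 Tensor Core peak, teraFLOPS
PEAK_BANDWIDTH_TBS = 3.9     # GPU memory bandwidth, TB/s

def ridge_point(tflops: float, tbs: float) -> float:
    """Arithmetic intensity (FLOP/byte) where the compute roof meets the memory roof."""
    return (tflops * 1e12) / (tbs * 1e12)

print(f"FP16 ridge point: {ridge_point(PEAK_FP16_TFLOPS, PEAK_BANDWIDTH_TBS):.0f} FLOP/byte")
```

A kernel below this intensity (for example, most bandwidth-heavy inference decode steps) is limited by the 3.9TB/s memory system rather than the Tensor Cores, which is why the H100 NVL's large 94GB, high-bandwidth memory matters as much as its raw teraFLOPS for LLM serving.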