NVIDIA H100

An Order-of-Magnitude Leap for Accelerated Computing
Tap into unprecedented performance, scalability, and security for every workload with the NVIDIA® H100 Tensor Core GPU. With the NVIDIA NVLink® Switch System, up to 256 H100 GPUs can be connected to accelerate exascale workloads. The GPU also includes a dedicated Transformer Engine to solve trillion-parameter language models. The H100’s combined technology innovations can speed up large language models (LLMs) by an incredible 30X over the previous generation to deliver industry-leading conversational AI.
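To see why a multi-GPU NVLink domain matters for trillion-parameter models, here is a back-of-envelope sketch (not from the source: it assumes FP16/BF16 weights at 2 bytes per parameter and counts weights only, ignoring optimizer state and activations):

```python
# Rough memory footprint of a 1-trillion-parameter model.
# Assumption: FP16/BF16 weights, 2 bytes per parameter, weights only --
# optimizer state and activations add a large multiple on top.

PARAMS = 1_000_000_000_000   # 1 trillion parameters
BYTES_PER_PARAM = 2          # FP16/BF16 (assumed)
H100_MEMORY_GB = 80          # per-GPU memory, per the spec table

weights_gb = PARAMS * BYTES_PER_PARAM / 1e9       # total weight storage in GB
gpus_for_weights = -(-weights_gb // H100_MEMORY_GB)  # ceiling division

print(f"Weights alone: {weights_gb:,.0f} GB -> at least {gpus_for_weights:.0f} H100s")
# -> Weights alone: 2,000 GB -> at least 25 H100s
```

Even under these generous assumptions, the weights alone overflow any single GPU, which is why H100 deployments lean on NVLink-connected multi-GPU systems.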


►Transformational AI Training

►Real-Time Deep Learning Inference

►Exascale High-Performance Computing

►Accelerated Data Analytics

►Enterprise-Ready Utilization

►Built-In Confidential Computing

►Unparalleled Performance for Large-Scale AI and HPC

Peak FP64: 24 TFLOPS
Peak FP64 Tensor Core: 48 TFLOPS
Peak FP32: 48 TFLOPS
Peak TF32 Tensor Core: 800 TFLOPS*
Peak BFLOAT16 Tensor Core: 1600 TFLOPS*
Peak FP16 Tensor Core: 1600 TFLOPS*
Peak FP8 Tensor Core: 3200 TFLOPS*
Peak INT8 Tensor Core: 3200 TOPS*
GPU Memory: 80GB
GPU Memory Bandwidth: 2TB/s
Decoders: 7 NVDEC, 7 JPEG
Max thermal design power (TDP): 350W
Multi-Instance GPU: Up to 7 MIGs @ 10GB each
Form factor: PCIe, dual-slot, air-cooled
Interconnect: NVLink 600GB/s; PCIe Gen5 128GB/s
Server options: Partner and NVIDIA-Certified Systems with 1–8 GPUs

* With sparsity
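The ratio of peak compute to memory bandwidth in the table above determines when a kernel stops being memory-bound. A minimal roofline sketch, using the table's TF32 and bandwidth figures (note the TF32 number is the sparsity-accelerated peak, so dense workloads reach the crossover sooner):

```python
# Roofline "ridge point": the arithmetic intensity (FLOPs per byte moved)
# at which a kernel shifts from memory-bound to compute-bound.
# Figures are from the spec table above; TF32 peak includes sparsity.

PEAK_TF32_FLOPS = 800e12   # 800 TFLOPS (Peak TF32 Tensor Core, with sparsity)
MEM_BANDWIDTH = 2e12       # 2 TB/s GPU memory bandwidth

ridge = PEAK_TF32_FLOPS / MEM_BANDWIDTH
print(f"Compute-bound above ~{ridge:.0f} FLOPs per byte moved")
# -> Compute-bound above ~400 FLOPs per byte moved
```

A ridge point of ~400 FLOPs/byte illustrates why bandwidth-heavy workloads (such as LLM inference) benefit as much from the 2TB/s memory system as from the Tensor Core peaks.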