Highlights
The NVIDIA A100 Tensor Core GPU delivers unprecedented acceleration at every scale for AI, data analytics, and HPC to tackle the world’s toughest computing challenges. As the engine of the NVIDIA data center platform, A100 can efficiently scale up to thousands of GPUs or, using new Multi-Instance GPU (MIG) technology, can be partitioned into seven isolated GPU instances to accelerate workloads of all sizes. A100’s third-generation Tensor Core technology now accelerates more levels of precision for diverse workloads, speeding time to insight as well as time to market.
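As a point of reference for the figures in the specification below, the following is a minimal CUDA device-query sketch. It assumes the CUDA toolkit is installed and that the A100 is visible as device 0; it simply prints the properties that correspond to the listed CUDA core count, memory size, and memory bus width.

// Minimal device-query sketch (assumes CUDA toolkit installed, A100 as device 0).
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    cudaDeviceProp prop;
    cudaError_t err = cudaGetDeviceProperties(&prop, 0);
    if (err != cudaSuccess) {
        std::fprintf(stderr, "cudaGetDeviceProperties failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    // An A100 reports compute capability 8.0 and 108 SMs (108 x 64 FP32 lanes = 6912 CUDA cores).
    std::printf("Name:                %s\n", prop.name);
    std::printf("Compute capability:  %d.%d\n", prop.major, prop.minor);
    std::printf("Multiprocessors:     %d\n", prop.multiProcessorCount);
    std::printf("Global memory:       %.1f GB\n", prop.totalGlobalMem / 1e9);
    std::printf("Memory bus width:    %d-bit\n", prop.memoryBusWidth);
    return 0;
}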
Good to know
Processor
CUDA: Yes
CUDA cores: 6912
FireStream: No
Graphics processor: A100
Graphics processor family: NVIDIA
Memory
Discrete graphics card memory: 40 GB
Graphics card memory type: High Bandwidth Memory 2 (HBM2)
Memory bandwidth (max): 1600 GB/s
Memory bus: 5120-bit
Ports & interfaces
Dual VGA: No
Interface type: PCI Express 4.0 x16
TV-out: No
Performance
AMD FreeSync: No
Dual DVO: No
Dual Link DVI: No
Full HD: No
HDCP: No