NVIDIA GPU
HPCMATE provides all levels of GPU models, in air-cooled or liquid-cooled versions, for any type of server or workstation.
GPU Tensor performance notes for RTX 4090
According to this thread, NVIDIA appears to have cut the tensor FP16 & TF32 operation rate in half, resulting in a 4090 with even lower tensor FP16 & TF32 performance than the 4080 16GB. This may have been done to prevent the 4090 from cannibalizing Quadro/Tesla sales. So if you are choosing between these GPUs, the 4090 gives you more memory but lower tensor performance than the 4080 16GB, even though the 4090 has more than twice the ray-tracing performance of the 4080 12GB.
Spec | RTX 4090 | RTX 4080 16GB | RTX 4080 12GB | RTX 3090 Ti |
---|---|---|---|---|
non-tensor FP32 TFLOPS | 82.6 (206%) | 48.7 (122%) | 40.1 (100%) | 40 (100%) |
non-tensor FP16 TFLOPS | 82.6 (206%) | 48.7 (122%) | 40.1 (100%) | 40 (100%) |
Tensor Cores | 512 (152%) | 304 (90%) | 240 (71%) | 336 (100%) |
Optical flow TOPS | 305 (242%) | 305 (242%) | 305 (242%) | 126 (100%) |
tensor FP16 w/ FP32 accumulate TFLOPS ** | 165.2 (207%) | 194.9 (244%) | 160.4 (200%) | 80 (100%) |
tensor TF32 TFLOPS ** | 82.6 (207%) | 97.5 (244%) | 80.2 (200%) | 40 (100%) |
Ray trace Cores | 128 (152%) | 76 (90%) | 60 (71%) | 84 (100%) |
Ray trace TFLOPS | 191 (245%) | 112.7 (144%) | 92.7 (119%) | 78.1 (100%) |
Power (W) | 450 (100%) | 320 (71%) | 285 (63%) | 450 (100%) |
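One way to see the halving in the table above (a back-of-the-envelope reading of the numbers, not an official NVIDIA figure): dividing tensor FP16 with FP32 accumulate by non-tensor FP16 gives 165.2 / 82.6 ≈ 2× for the 4090 and 80 / 40 = 2× for the 3090 Ti, but 194.9 / 48.7 ≈ 4× for the 4080 16GB and 160.4 / 40.1 ≈ 4× for the 4080 12GB. In other words, the 4090 accumulates at half the relative tensor rate of the two 4080s.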
NVIDIA GPU Architecture
nvcc sm flags and what they're used for: When compiling with NVCC[1],

- the arch flag ('-arch') specifies the name of the NVIDIA GPU architecture that the CUDA files will be compiled for;
- gencodes ('-gencode') allow more PTX generations and can be repeated many times for different architectures (see the sketch after the table below).
Matching CUDA arch and CUDA gencode for various NVIDIA architectures
Series | Architecture (--arch) | CUDA gencode (--sm) | Compute Capability | Notable Models | Supported CUDA version | Key Features |
---|---|---|---|---|---|---|
Tesla | Tesla | sm_10, sm_11, sm_12, sm_13 | 1.0, 1.1, 1.2, 1.3 | C1060, M2050 | CUDA 1.0 until CUDA 6.5 | First dedicated GPGPU series |
Fermi | Fermi | sm_20 | 2.0, 2.1 | GTX 400, GTX 500, Tesla 20-series, Quadro 4000/5000 | CUDA 3.2 until CUDA 8 | First to feature CUDA cores and support for ECC memory |
Kepler | Kepler | sm_30, sm_35, sm_37 | 3.0, 3.5, 3.7 | GTX 600, GTX 700, Tesla K-series, Quadro K-series | CUDA 5 until CUDA 10 | First to feature Dynamic Parallelism and Hyper-Q |
Maxwell | Maxwell | sm_50, sm_52, sm_53 | 5.0, 5.2, 5.3 | GTX 900, Quadro M-series | CUDA 6 until CUDA 11 | First to support VR and 4K displays |
Pascal | Pascal | sm_60, sm_61, sm_62 | 6.0, 6.1, 6.2 | GTX 1000, Quadro P-series | CUDA 8 and later | First to support simultaneous multi-projection |
Volta | Volta | sm_70, sm_72 (Xavier) | 7.0, 7.2 | Titan V, Tesla V100, Quadro GV100 | CUDA 9 and later | First to feature Tensor Cores and NVLink 2.0 |
Turing | Turing | sm_75 | 7.5 | RTX 2000, GTX 1600, Quadro RTX | CUDA 10 and later | First to feature Ray Tracing Cores and RTX technology |
Ampere | Ampere | sm_80, sm_86, sm_87 (Orin) | 8.0, 8.6, 8.7 | RTX 3000, A-series | CUDA 11.1 and later | Features third-generation Tensor Cores and more |
Lovelace | Ada Lovelace[2] | sm_89 | 8.9 | GeForce RTX 4070 Ti (AD104), GeForce RTX 4080 (AD103), GeForce RTX 4090 (AD102), Nvidia RTX 6000 Ada Generation (AD102, formerly Quadro), Nvidia L40 (AD102, formerly Tesla) | CUDA 11.8 and later, cuDNN 8.6 and later | Fourth-generation Tensor Cores and third-generation RT Cores |
Hopper[3] | Hopper | sm_90, sm_90a (Thor) | 9.0 | H100 | CUDA 12 and later | Fourth-generation Tensor Cores and HBM3 memory |
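As a minimal sketch of how the two flags are used in practice, and of how to check which sm_XX target a given card needs: the file below is hypothetical, and the sm_70/sm_80/sm_86 targets in the build comments are just examples taken from the table above. The program prints each device's compute capability, which can then be matched against the Compute Capability and gencode columns.

```cuda
// query_cc.cu -- hypothetical example illustrating the -arch / -gencode flags.
//
// Single-architecture build (machine code for sm_86 only):
//   nvcc -arch=sm_86 query_cc.cu -o query_cc
//
// Fat binary covering several architectures, plus PTX for forward compatibility:
//   nvcc query_cc.cu -o query_cc \
//       -gencode arch=compute_70,code=sm_70 \
//       -gencode arch=compute_80,code=sm_80 \
//       -gencode arch=compute_86,code=sm_86 \
//       -gencode arch=compute_86,code=compute_86
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        printf("No CUDA-capable device found\n");
        return 1;
    }
    for (int dev = 0; dev < count; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        // major.minor is the compute capability, e.g. 8.6 -> compile with sm_86.
        printf("Device %d: %s, compute capability %d.%d (sm_%d%d)\n",
               dev, prop.name, prop.major, prop.minor, prop.major, prop.minor);
    }
    return 0;
}
```

The last -gencode line embeds PTX for compute_86, so the driver can JIT-compile the program for newer architectures that are not listed explicitly.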
NVIDIA GPU Models
Model | Architecture | CUDA Cores | Tensor Cores | RT Cores | FF | Memory Size | MIG[4] | Memory Bandwidth | TDP | Max. Temp. | Launch Date |
---|---|---|---|---|---|---|---|---|---|---|---|
H100-SXM5 | Hopper (GH100) | 16896 | 4th Gen 528 | No | SXM5 | 80 GB HBM3, 50 MB L2 cache | 7@10GB | 3.35 TB/s | 700W | | Jan 2023 |
H100-PCIE[5][6] | Hopper (GH100) | 14592 | 4th Gen 456 | No | PCIe Gen 5 x16 | 80 GB HBM2, 50 MB L2 cache | 7@10GB | 2 TB/s | 300~350W | | Jan 2023 |
Tesla C1060 | Tesla | 240 | No | No | | 4 GB GDDR3 | | 102 GB/s | 238W | | Dec 2008 |
Tesla K10 | Kepler | 3072 | No | No | | 8 GB GDDR5 | | 320 GB/s | 225W | | May 2012 |
Tesla K20 | Kepler | 2496 | No | No | | 5/6 GB GDDR5 | | 208 GB/s | 225W | | Nov 2012 |
Tesla K40 | Kepler | 2880 | No | No | | 12 GB GDDR5 | | 288 GB/s | 235W | | Nov 2013 |
Tesla K80 | Kepler | 4992 | No | No | | 24 GB GDDR5 | | 480 GB/s | 300W | | Nov 2014 |
Tesla M40 | Maxwell | 3072 | No | No | | 12 GB GDDR5 | | 288 GB/s | 250W | | Nov 2015 |
Tesla P4 | Pascal | 2560 | No | No | | 8 GB GDDR5 | | 192 GB/s | 75W | | Sep 2016 |
Tesla P40 | Pascal | 3840 | No | No | | 24 GB GDDR5X | | 480 GB/s | 250W | | Sep 2016 |
Tesla V100 | Volta | 5120 | 640 | No | | 16/32 GB HBM2 | | 900 GB/s | 300W | | May 2017 |
Tesla T4 | Turing | 2560 | 320 | No | | 16 GB GDDR6 | | 320 GB/s | 70W | | Sep 2018 |
A100 PCIe | Ampere (GA100) | 6912 | 432 | No | PCIe | 40 GB HBM2 / 80 GB HBM2 | | 1555 GB/s | 250W | | May 2020 |
A100 SXM4 | Ampere (GA100) | 6912 | 432 | No | SXM4 | 40 GB HBM2 / 80 GB HBM2 | 7 | 1555 GB/s | 400W | | May 2020 |
A30 | Ampere | 7424 | 184 | No | | 24 GB GDDR6 | 4 | 696 GB/s | 165W | | Apr 2021 |
A40 | Ampere | 10752 | 336 | Yes | | 48 GB GDDR6 | | 696 GB/s | 300W | | Apr 2021 |
A10 | Ampere | 10240 | 320 | Yes | | 24 GB GDDR6 | | 624 GB/s | 150W | | Mar 2021 |
A16[7] | Ampere | 5120 | 3rd Gen 160 | 40 | PCIe Gen4 x16 | 64 GB GDDR6 | | 800 GB/s | 250W | | Mar 2021 |
A100 80GB | Ampere (GA100) | 6912 | 432 | No | | 80 GB HBM2 | 7@10GB | 1935 GB/s | 300W | | Apr 2021 |
A100 40GB | Ampere (GA100) | 6912 | 432 | No | | 40 GB HBM2 | 7@5GB | 1555 GB/s | 250W | | May 2020 |
A200 PCIe | Ampere | 10752 | 672 | Yes | | 80 GB HBM2 / 160 GB HBM2 | | 2050 GB/s | 400W | | Nov 2021 |
A200 SXM4 | Ampere | 10752 | 672 | Yes | | 80 GB HBM2 / 160 GB HBM2 | | 2050 GB/s | 400W | | Nov 2021 |
A5000 | Ampere | 8192 | 256 | Yes | | 24 GB GDDR6 | | 768 GB/s | 230W | | Apr 2021 |
A4000 | Ampere | 6144 | 192 | Yes | | 16 GB GDDR6 | | 512 GB/s | 140W | | Apr 2021 |
A3000 | Ampere | 3584 | 112 | Yes | | 24 GB GDDR6 | | | | | |
Titan RTX | Turing | 4608 | 576 | Yes | | 24 GB GDDR6 | | 672 GB/s | 280W | | Dec 2018 |
GeForce RTX 4090 | Ada Lovelace | 16384 | 512 | Yes, 128 | | 24 GB GDDR6X | | 1008 GB/s (21 Gbps) | 450W | 90°C | Oct 2022 |
GeForce RTX 3090 Ti | Ampere | 10752 | 336 | 84 | | 24 GB GDDR6X | | 1008 GB/s (21 Gbps) | 450W | 93°C[8] | Mar 2022 |
GeForce RTX 3090 | Ampere | 10496 | 328 | Yes | | 24 GB GDDR6X | | 936 GB/s | 350W | | Sep 2020 |
GeForce RTX 3080 Ti | Ampere | 10240 | 320 | Yes | | 12 GB GDDR6X | | 912 GB/s | 350W | | May 2021 |
GeForce RTX 3080 | Ampere | 8704 | 272 | Yes | | 10 GB GDDR6X | | 760 GB/s | 320W | | Sep 2020 |
GeForce RTX 3070 Ti | Ampere | 6144 | 192 | Yes | | 8 GB GDDR6X | | 608 GB/s | 290W | | Jun 2021 |
GeForce RTX 3070 | Ampere | 5888 | 184 | Yes | | 8 GB GDDR6 | | 448 GB/s | 220W | | Oct 2020 |
GeForce RTX 3060 Ti | Ampere | 4864 | 152 | Yes | | 8 GB GDDR6 | | 448 GB/s | 200W | | Dec 2020 |
GeForce RTX 3060 | Ampere | 3584 | 112 | Yes | | 12 GB GDDR6 | | 360 GB/s | 170W | | Feb 2021 |
Quadro RTX 8000 | Turing | 4608 | 576 | Yes | | 48 GB GDDR6 | | 624 GB/s | 295W | | Aug 2018 |
Quadro RTX 6000 | Turing | 4608 | 576 | Yes | | 24 GB GDDR6 | | 432 GB/s | 260W | | Aug 2018 |
Tesla L40[9] | Ada Lovelace | 18176 | 4th Gen 568 | 3rd Gen 142 | PCIe Gen4 x16 | 48 GB GDDR6 with ECC | | 864 GB/s | 300W | | Jan 2023 |
Quadro RTX 5000 | Turing | 3072 | 384 | Yes | | 16 GB GDDR6 | | 448 GB/s | 230W | | Nov 2018 |
Quadro RTX 4000 | Turing | 2304 | 288 | Yes | | 8 GB GDDR6 | | 416 GB/s | 160W | | Nov 2018 |
Titan RTX (T-Rex) | Turing | 4608 | 576 | Yes | | 24 GB GDDR6 | | 672 GB/s | 280W | | Dec 2018 |
Titan V | Volta | 5120 | 640 | No | | 12 GB HBM2 | | 652.8 GB/s | 250W | | Dec 2017 |
Tesla V100 (PCIe) | Volta | 5120 | 640 | No | PCIe | 32/16 GB HBM2 | | 900 GB/s | 250W | | June 2017 |
Tesla V100 (SXM2) | Volta | 5120 | 640 | No | SXM2 | 32/16 GB HBM2 | | 900 GB/s | 300W | | June 2017 |
Quadro GV100 | Volta | 5120 | 640 | No | | 32 GB HBM2 | | 870 GB/s | 250W | | Mar 2018 |
Tesla GV100 (SXM2) | Volta | 5120 | 640 | No | SXM2 | 32 GB HBM2 | | 900 GB/s | 300W | | Mar 2018 |
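The Memory Bandwidth column lists theoretical peak figures. As a rough cross-check for a card you actually have, peak bandwidth can be estimated from the device's memory clock and bus width (times two for double data rate). A minimal sketch using the CUDA runtime attribute API (the file name is hypothetical):

```cuda
// mem_bw.cu -- hypothetical helper that estimates theoretical peak memory
// bandwidth, for comparison against the Memory Bandwidth column above.
// Build with e.g.: nvcc mem_bw.cu -o mem_bw
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int dev = 0; dev < count; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);

        int memClockKHz = 0;   // memory clock in kHz
        int busWidthBits = 0;  // memory bus width in bits
        cudaDeviceGetAttribute(&memClockKHz, cudaDevAttrMemoryClockRate, dev);
        cudaDeviceGetAttribute(&busWidthBits, cudaDevAttrGlobalMemoryBusWidth, dev);

        // Peak GB/s = clock (Hz) * 2 (double data rate) * bus width (bytes) / 1e9
        double peakGBs = memClockKHz * 1e3 * 2.0 * (busWidthBits / 8.0) / 1e9;
        printf("Device %d: %s, %d-bit bus, ~%.0f GB/s theoretical peak\n",
               dev, prop.name, busWidthBits, peakGBs);
    }
    return 0;
}
```

On an RTX 4090, for example, this should work out to roughly 1008 GB/s, in line with the table; actual achievable bandwidth is somewhat lower.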
NVIDIA Features by Architecture[10]
NVIDIA GPU Architectures | AD102 | GA102 | GA100 | TU102 | GV100 | GP102 | GP100 |
---|---|---|---|---|---|---|---|
Launch Year | 2022 | 2020 | 2020 | 2018 | 2017 | 2017 | – |
Architecture | Ada Lovelace | Ampere | Ampere | Turing | Volta | Pascal | Pascal |
Form Factor | – | – | SXM4/PCIe | – | SXM2/PCIe | – | SXM/PCIe |
TDP | – | – | 400W | – | 300W | – | 300W |
Node | TSMC 4N | SAMSUNG 8N | – | TSMC 12nm | TSMC 12nm | TSMC 16nm | – |
CUDA Cores | 18432 | 10752 | – | 4608 | 5120 | 3840 | – |
Tensor Cores | 576 Gen4 | 336 Gen3 | – | 576 Gen2 | 640 | – | – |
RT Cores | 144 Gen3 | 84 Gen2 | – | 72 Gen1 | – | – | – |
Memory Bus | GDDR6X 384-bit | GDDR6X 384-bit | – | GDDR6 384-bit | HBM2 3072-bit | GDDR5X 384-bit | – |
NVIDIA Grace Architecture
NVIDIA has announced that it will partner with server manufacturers such as HPE, Atos, and Supermicro to build servers that integrate the Grace architecture with ARM-based CPUs. These servers are expected to be available in the second half of 2023, at which point HPCMATE will start offering those products through local and global partners.
Architecture | Key Features |
---|---|
Grace | CPU-GPU integration, ARM Neoverse CPU, HBM2E memory; 900 GB/s memory bandwidth; support for PCIe 5.0 and NVLink; 10x performance improvement for certain HPC workloads; energy-efficiency improvements through a unified memory space |
References
1. https://arnon.dk/matching-sm-architectures-arch-and-gencode-for-various-nvidia-cards/
2. https://en.wikipedia.org/wiki/Ada_Lovelace_(microarchitecture)
3. https://www.nvidia.com/en-us/data-center/h100/
4. https://docs.nvidia.com/datacenter/tesla/mig-user-guide/
5. https://www.nvidia.com/content/dam/en-zz/Solutions/gtcs22/data-center/h100/PB-11133-001_v01.pdf
6. https://resources.nvidia.com/en-us-tensor-core/nvidia-tensor-core-gpu-datasheet
7. https://images.nvidia.com/content/Solutions/data-center/vgpu-a16-datasheet.pdf
8. https://hothardware.com/reviews/nvidia-geforce-rtx-4090-gpu-review
9. https://www.nvidia.com/content/dam/en-zz/Solutions/design-visualization/support-guide/NVIDIA-L40-Datasheet-January-2023.pdf
10. https://videocardz.com/newz/nvidia-details-ad102-gpu-up-to-18432-cuda-cores-76-3b-transistors-and-608-mm%C2%B2