AWS GPU Instances
This table is generated by transform_gpus.py on GitHub, using data from the Instances codebase.
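As a rough illustration of that pipeline, the sketch below shows how such a transform could render the table from JSON data. The file name instances.json and the field names (instance_type, gpu_model, compute_capability, gpu_count, cuda_cores, gpu_memory) are assumptions made for the example, not the actual Instances schema.

```python
# Sketch only: the field names below are assumptions, not the actual Instances schema.
import json

COLUMNS = ["instance_type", "gpu_model", "compute_capability",
           "gpu_count", "cuda_cores", "gpu_memory"]

def build_table(path: str) -> str:
    """Render GPU instance data as a Markdown table."""
    with open(path) as f:
        instances = json.load(f)  # assumed: a list of dicts, one per instance type

    header = ("GPU Instance | Model | Compute Capability | GPU Count | "
              "Total CUDA Cores | Total GPU Memory (GB) |")
    rows = [header, "---|" * 6]
    for inst in instances:
        if not inst.get("gpu_count"):
            continue  # skip instance types without GPUs
        rows.append(" | ".join(str(inst.get(col, "-")) for col in COLUMNS) + " |")
    return "\n".join(rows)

if __name__ == "__main__":
    print(build_table("instances.json"))
```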
For more detail on matching CUDA compute capability, CUDA gencode, and ML framework versions across NVIDIA architectures, see this up-to-date resource. The NVIDIA documentation also explains compute capability.
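To check the compute capability of whichever GPU an instance actually exposes, one option is PyTorch's CUDA utilities; the snippet below is a small illustration and assumes a CUDA-enabled PyTorch build is installed on the instance.

```python
# Illustrative check; assumes a CUDA-enabled PyTorch build on the instance.
import torch

if torch.cuda.is_available():
    for i in range(torch.cuda.device_count()):
        major, minor = torch.cuda.get_device_capability(i)
        name = torch.cuda.get_device_name(i)
        # e.g. on a g4dn instance: "Tesla T4" with compute capability 7.5,
        # which corresponds to the nvcc flag -gencode arch=compute_75,code=sm_75
        print(f"GPU {i}: {name}, compute capability {major}.{minor}")
else:
    print("No CUDA device visible (check drivers or instance type)")
```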
GPU Instance | Model | Compute Capability | GPU Count | Total CUDA Cores | Total GPU Memory (GB) |
---|---|---|---|---|---|
g2.2xlarge | NVIDIA GRID K520 | 3.0 | 1 | 1536 | 4 |
g2.8xlarge | NVIDIA GRID K520 | 3.0 | 4 | 6144 | 16 |
g3s.xlarge | NVIDIA Tesla M60 | 5.2 | 1 | 2048 | 8 |
g3.4xlarge | NVIDIA Tesla M60 | 5.2 | 1 | 2048 | 8 |
g3.8xlarge | NVIDIA Tesla M60 | 5.2 | 2 | 4096 | 16 |
g3.16xlarge | NVIDIA Tesla M60 | 5.2 | 4 | 8192 | 32 |
g4dn.xlarge | NVIDIA T4 Tensor Core | 7.5 | 1 | 2560 | 16 |
g4dn.2xlarge | NVIDIA T4 Tensor Core | 7.5 | 1 | 2560 | 16 |
g4dn.4xlarge | NVIDIA T4 Tensor Core | 7.5 | 1 | 2560 | 16 |
g4dn.8xlarge | NVIDIA T4 Tensor Core | 7.5 | 1 | 2560 | 16 |
g4dn.16xlarge | NVIDIA T4 Tensor Core | 7.5 | 1 | 2560 | 16 |
g4dn.12xlarge | NVIDIA T4 Tensor Core | 7.5 | 4 | 10240 | 64 |
g4dn.metal | NVIDIA T4 Tensor Core | 7.5 | 8 | 20480 | 128 |
p2.xlarge | NVIDIA Tesla K80 | 3.7 | 1 | 2496 | 12 |
p2.8xlarge | NVIDIA Tesla K80 | 3.7 | 8 | 19968 | 96 |
p2.16xlarge | NVIDIA Tesla K80 | 3.7 | 16 | 39936 | 192 |
p3.2xlarge | NVIDIA Tesla V100 | 7.0 | 1 | 5120 | 16 |
p3.8xlarge | NVIDIA Tesla V100 | 7.0 | 4 | 20480 | 64 |
p3.16xlarge | NVIDIA Tesla V100 | 7.0 | 8 | 40960 | 128 |
p3dn.24xlarge | NVIDIA Tesla V100 | 7.0 | 8 | 40960 | 256 |
g5.xlarge | NVIDIA A10G | 8.6 | 1 | 9216 | 24 |
g5.2xlarge | NVIDIA A10G | 8.6 | 1 | 9216 | 24 |
g5.4xlarge | NVIDIA A10G | 8.6 | 1 | 9216 | 24 |
g5.8xlarge | NVIDIA A10G | 8.6 | 1 | 9216 | 24 |
g5.16xlarge | NVIDIA A10G | 8.6 | 1 | 9216 | 24 |
g5.12xlarge | NVIDIA A10G | 8.6 | 4 | 36864 | 96 |
g5.24xlarge | NVIDIA A10G | 8.6 | 4 | 36864 | 96 |
g5.48xlarge | NVIDIA A10G | 8.6 | 8 | 73728 | 192 |
p4d.24xlarge | NVIDIA A100 | 8.0 | 8 | 55296 | 320 |
p4de.24xlarge | NVIDIA A100 | 8.0 | 8 | 55296 | 640 |
g5g.xlarge | NVIDIA T4G Tensor Core | 7.5 | 1 | 2560 | 16 |
g5g.2xlarge | NVIDIA T4G Tensor Core | 7.5 | 1 | 2560 | 16 |
g5g.4xlarge | NVIDIA T4G Tensor Core | 7.5 | 1 | 2560 | 16 |
g5g.8xlarge | NVIDIA T4G Tensor Core | 7.5 | 1 | 2560 | 16 |
g5g.16xlarge | NVIDIA T4G Tensor Core | 7.5 | 2 | 5120 | 32 |
g5g.metal | NVIDIA T4G Tensor Core | 7.5 | 2 | 5120 | 32 |
g4ad.xlarge | AMD Radeon Pro V520 | - | 1 | - | 8 |
g4ad.2xlarge | AMD Radeon Pro V520 | - | 1 | - | 8 |
g4ad.4xlarge | AMD Radeon Pro V520 | - | 1 | - | 8 |
g4ad.8xlarge | AMD Radeon Pro V520 | - | 2 | - | 16 |
g4ad.16xlarge | AMD Radeon Pro V520 | - | 4 | - | 32 |
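On a running NVIDIA-backed instance, the GPU count and memory listed above can be cross-checked with nvidia-smi. The snippet below is an illustrative sketch: it assumes the NVIDIA driver is installed and does not apply to the AMD-based g4ad family.

```python
# Illustrative cross-check against the table; assumes nvidia-smi is on PATH.
import subprocess

query = ["nvidia-smi", "--query-gpu=name,memory.total", "--format=csv,noheader"]
output = subprocess.run(query, capture_output=True, text=True, check=True).stdout

gpus = [line.strip() for line in output.splitlines() if line.strip()]
print(f"GPU count: {len(gpus)}")
for gpu in gpus:
    print(gpu)  # one line per GPU, e.g. "Tesla T4, <memory> MiB"
```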
Contribute
Contribute to this page on GitHub or join the #cloud-costs-handbook
channel in the Vantage Community Slack.