Built for AI Training

GPU Leases

H100

H200

B200

L40

A100

GB200

Choose the right GPU for you

Cluster Training H200

⚪NVIDIA HGX H200 GPU

⚪1128 GB RAM

⚪20 vCPU

8 Card H200 GPU Node

  • Equipped with 141 GB of HBM3e memory per GPU, with 4.8 TB/s of bandwidth.
  • Enhanced Tensor Core architecture and faster memory bandwidth for large-scale AI deployments.
  • For example, when serving the full DeepSeek model, a single 8-card H200 node is expected to deliver roughly 30% higher inference throughput than 16 H100 cards.
  • Bare metal delivery
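The bandwidth figures above roughly bound decode throughput for memory-bound LLM inference: each generated token requires streaming the model weights from HBM once, so tokens/s per GPU ≈ bandwidth ÷ model size. A minimal sketch (the model size and precision are illustrative assumptions, not measured figures):

```python
def decode_tokens_per_sec(model_bytes: float, bandwidth_bytes_per_sec: float) -> float:
    """Upper bound on single-stream decode rate when inference is
    memory-bandwidth-bound: every token reads all weights once."""
    return bandwidth_bytes_per_sec / model_bytes

# Illustrative: a 70B-parameter model in FP8 (1 byte/param)
# on one H200 with 4.8 TB/s of HBM3e bandwidth.
weights = 70e9 * 1.0
h200_bw = 4.8e12
print(f"{decode_tokens_per_sec(weights, h200_bw):.0f} tokens/s upper bound")  # ~69
```

Real throughput depends on batching, KV-cache traffic, and interconnect, so treat this as a ceiling rather than an expected number.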

Cluster Training H100

⚪NVIDIA HGX H100 GPU

⚪256GB RAM

⚪20 vCPU

8 Card H100 GPU Node

  • Supports NVLink for multi-card training
  • 25Gbps private network bandwidth
  • Public network bandwidth 10Gbps or more
  • Supports virtual machine or bare metal provisioning

Cluster AMD MI300X

⚪AMD MI300X

⚪1536 GB

⚪Intel® Xeon® Platinum 8568Y

8 Card AMD MI300X GPU Node

  • 5.3 TB/s of HBM3 memory bandwidth
  • 2048 GiB memory
  • 6 TiB NVMe storage
  • Bare metal offering

Cluster Training A100

⚪NVIDIA A100 SXM4 GPU

⚪90 GB RAM

⚪12 vCPU

8 Card A100 GPU Node

  • 8× NVIDIA A100 SXM4 GPUs with 80 GB each (640 GB HBM2 memory total)
  • Supports NVLink for multi-card training
  • 10Gbps private network bandwidth
  • Up to 10Gbps public network bandwidth, with 2Gbps guaranteed

Inference

A100-80G

90 GB RAM
12 vCPU

A6000

45 GB RAM
8 vCPU

V100

16 GB GPU memory
30 GB RAM
8 vCPU

A5000

45 GB RAM
8 vCPU

A4000

45 GB RAM
8 vCPU

Compare these GPUs

| GPU   | Architecture | FP16 Performance | FP32 Performance | Memory Capacity | Memory Type | Bandwidth  |
|-------|--------------|------------------|------------------|-----------------|-------------|------------|
| A100  | Ampere       | 312 TFLOPS       | 19.5 TFLOPS      | 80 GB           | HBM2        | 2,039 GB/s |
| V100  | Volta        | 125 TFLOPS       | 15.7 TFLOPS      | 32 GB           | HBM2        | 900 GB/s   |
| A6000 | Ampere       | 77.4 TFLOPS      | 38.7 TFLOPS      | 48 GB           | GDDR6       | 768 GB/s   |
| A5000 | Ampere       | 54.2 TFLOPS      | 27.8 TFLOPS      | 24 GB           | GDDR6       | 768 GB/s   |
| A4000 | Ampere       | 19.17 TFLOPS     | 19.17 TFLOPS     | 16 GB           | GDDR6       | 448 GB/s   |
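When choosing a GPU from the capacities above, a useful rule of thumb is that model weights need roughly parameters × bytes-per-parameter of memory, plus headroom for activations and KV cache. A hedged sketch (the 20% overhead factor is an assumption for illustration, not a vendor figure):

```python
def weights_fit(num_params: float, bytes_per_param: float,
                gpu_mem_gb: float, overhead: float = 1.2) -> bool:
    """Estimate whether model weights fit on a single GPU.
    The overhead factor is an assumed allowance for activations/KV cache."""
    required_gb = num_params * bytes_per_param * overhead / 1e9
    return required_gb <= gpu_mem_gb

# A 13B-parameter model in FP16 (2 bytes/param) needs ~31 GB with overhead:
print(weights_fit(13e9, 2, 80))  # A100 80 GB -> True
print(weights_fit(13e9, 2, 16))  # A4000 16 GB -> False
```

Models that fail this check on one card are candidates for the NVLink-connected multi-card nodes listed above.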

Quick Start

For quotations and solution design consultations, our certified product specialists are ready to help.