On-demand GPU pricing
Model | VRAM (GB) | Max vCPUs per GPU | Max RAM (GB) per GPU | On-Demand Price (/hr)
---|---|---|---|---
Nvidia H200 | | | |
Nvidia H100 SXM | | | |
Nvidia H100 PCIe | | | |
Nvidia A100 80GB PCIe | | | |
Nvidia L40S | | | |
Volume discounts start at 8+ GPUs.
Our reserved clusters are designed for large-scale training and inference, offering industry-leading turnaround times and unbeatable pricing.
24/7 MLOps support, with a 15-minute response time, advanced monitoring, and automated remediation.
Fully managed K8s or Slurm, so you don't have to worry about complex infrastructure and can focus on your models.
Starting at $1.94/hr for 12+ month commitments on large, InfiniBand-connected H100 clusters.
Large-scale GPU clusters
Designed for large-scale training and inference, deployed on our fully managed cloud infrastructure.
GPU Count
8 – 10K+
Term
30 days or longer
On-demand GPU instances
Launch GPU instances in under 5 minutes and seamlessly scale to hundreds of GPUs on demand.
GPU Count
1 – 100+
Term
by the hour
Trusted by top AI companies
Get started now
We can provision thousands of high-demand GPUs in record time, at the best prices. Skip the waitlist and start training now.