
Cheapest H100 Cloud Rental in 2025: Full Price Comparison

March 20, 2026 · 6 min read

The NVIDIA H100 is the gold standard for LLM training and high-throughput inference, but H100 pricing varies dramatically across providers: from $2.29/hr to $5.00/hr for comparable specs. Picking the wrong provider for a week-long training run can cost you an extra $500–$1,500. Here's the current price landscape, updated regularly.

H100 Pricing Across Providers (2025)

| Provider | H100 Variant | Price/hr (1 GPU) | Price/hr (8 GPU) | Notes |
|---|---|---|---|---|
| Hyperstack | H100 NVL 94GB | $2.29/hr | $18.32/hr | EU-based, strong NVLink |
| Lambda Labs | H100 SXM5 80GB | $2.49/hr | $19.92/hr | Best reliability, 99.9% SLA |
| CoreWeave | H100 SXM5 80GB | $2.79/hr | $22.32/hr | Enterprise focus, InfiniBand clusters |
| RunPod On-Demand | H100 SXM5 80GB | $2.79/hr | $22.32/hr | Good developer experience |
| RunPod Spot | H100 SXM5 80GB | $1.20–2.10/hr | $9.60–16.80/hr | Interruptible, 40–60% off |
| Vast.ai | H100 SXM5 80GB | $1.80–2.80/hr | $14.40–22.40/hr | Marketplace, varies by host |
| FluidStack | H100 NVL 94GB | $2.39/hr | $19.12/hr | European provider |
| DataCrunch | H100 SXM5 80GB | $2.49/hr | $19.92/hr | European, ISO 27001 certified |
💡 H100 NVL vs H100 SXM5: The NVL variant has 94 GB of HBM3 (vs 80 GB for the SXM5) and links card pairs over an NVLink bridge, while the SXM5 gets the full 900 GB/s NVLink 4.0 fabric within a node. The NVL's extra 14 GB per GPU helps memory-bound workloads; for single-GPU, compute-bound workloads, the two variants are roughly equivalent per FLOP.
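To decide whether the extra 14 GB matters for your job, a back-of-envelope memory check is often enough. The sketch below is a rough heuristic, not a framework API: `fits_in_vram` and its ~20% overhead fudge factor are assumptions, and real usage depends on optimizer state, activations, and KV cache.

```python
def fits_in_vram(params_b: float, bytes_per_param: float, vram_gb: float,
                 overhead: float = 1.2) -> bool:
    """Rough check: do the model weights, plus an assumed ~20% overhead for
    activations/KV cache, fit in a single GPU's memory?"""
    weights_gb = params_b * bytes_per_param  # 1B params * N bytes = N GB
    return weights_gb * overhead <= vram_gb

# LLaMA 3 70B in 4-bit (QLoRA): 70 * 0.5 = 35 GB of weights
print(fits_in_vram(70, 0.5, 80))   # True: fits on an 80 GB SXM5
# Same model in fp16: 140 GB of weights alone
print(fits_in_vram(70, 2.0, 94))   # False: exceeds even the 94 GB NVL
```

This is why the QLoRA row in the workload table below needs only one H100, while full fine-tuning needs eight.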

Total Cost for Common H100 Workloads

| Workload | GPU Config | Runtime | Cost at $2.49/hr | Cost at $1.50/hr (spot) |
|---|---|---|---|---|
| Fine-tune LLaMA 3 70B (QLoRA) | 1× H100 | 6–8 hrs | $15–$20 | $9–$12 |
| Pre-train 7B model to 100B tokens | 8× H100 | ~4 days | $1,900 | $1,150 |
| Fine-tune LLaMA 3 70B (full) | 8× H100 | 2–3 days | $960–$1,440 | $575–$860 |
| Production inference API (24/7) | 1× H100 | 1 month | $1,793/month | Not recommended (spot) |
| Benchmark / experiment (2 hrs) | 1× H100 | 2 hrs | $5.00 | $3.00 |
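The figures above all come from the same simple arithmetic, which you can reuse for your own workloads (`run_cost` is just an illustrative helper, not a provider API):

```python
def run_cost(gpus: int, hours: float, rate_per_gpu_hr: float) -> float:
    """Total cost = number of GPUs x wall-clock hours x hourly rate per GPU."""
    return gpus * hours * rate_per_gpu_hr

# Full fine-tune of a 70B model: 8x H100 for 2 days at $2.49/hr
print(round(run_cost(8, 48, 2.49)))  # -> 956, the low end of the ~$960 row
# Same job on $1.50/hr spot capacity
print(round(run_cost(8, 48, 1.50)))  # -> 576
```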

When to Use Spot vs On-Demand H100

H100 spot instances on RunPod and Vast.ai offer 40–60% discounts over on-demand, at the cost of potential interruption. The rule of thumb: use spot for any workload with automatic checkpointing (Axolotl, HuggingFace Trainer, DeepSpeed all support this). Use on-demand for production inference APIs, time-sensitive experiments, and any job where interruption would mean re-running from scratch.
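The checkpoint-resume pattern that makes spot safe can be sketched in plain Python. This is a toy stand-in for what Axolotl, HuggingFace Trainer, and DeepSpeed do with real model and optimizer state; the JSON state file and step counter are illustrative assumptions.

```python
import json
import os
import tempfile

def train(total_steps: int, ckpt: str) -> int:
    """Resumable loop: reload the last saved step on restart, checkpoint every
    10 steps. Real trainers persist model weights and optimizer state instead."""
    step = 0
    if os.path.exists(ckpt):               # instance was preempted and restarted
        with open(ckpt) as f:
            step = json.load(f)["step"]
    while step < total_steps:
        step += 1                          # ... one optimizer step would run here ...
        if step % 10 == 0:
            with open(ckpt, "w") as f:     # persist progress to durable storage
                json.dump({"step": step}, f)
    return step

ckpt = os.path.join(tempfile.mkdtemp(), "ckpt.json")
train(15, ckpt)         # first run saves a checkpoint at step 10, then is "preempted"
print(train(30, ckpt))  # restart resumes from step 10 rather than step 0 -> 30
```

With this in place, a spot interruption costs you at most the work since the last checkpoint, which is what makes the 40–60% discount worth taking for training jobs.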

What About Reserved / Committed H100 Pricing?

If you're running H100s continuously or near-continuously, committed contracts can cut costs by 30–50%, but the savings depend heavily on term length. Lambda Labs' 1-month reserved H100 runs roughly $1,800/month, essentially the same as the $1,793/month on-demand cost, so short commitments buy little. CoreWeave and Hyperstack offer 3-month and 12-month contracts with meaningful discounts, and at 6+ months of continuous use, reserved pricing on either drops to an equivalent of ~$1.50–$1.80/hr.
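Whether a reserved contract pays off comes down to utilization, since you pay for reserved capacity whether or not it's busy. A quick break-even sketch, using a ~$1.65/hr reserved-equivalent rate (an assumed midpoint of the range above) against the $2.49/hr on-demand rate:

```python
HOURS_PER_MONTH = 720  # 30-day month, matching the $1,793/month figure above

def breakeven_utilization(reserved_rate_hr: float, on_demand_rate_hr: float) -> float:
    """Fraction of the month a GPU must be busy before a reserved contract
    (billed around the clock) beats pay-as-you-go on-demand pricing."""
    reserved_monthly = reserved_rate_hr * HOURS_PER_MONTH
    on_demand_monthly_full = on_demand_rate_hr * HOURS_PER_MONTH
    return reserved_monthly / on_demand_monthly_full

# Reserved wins once the GPU is busy more than ~66% of the month
print(round(breakeven_utilization(1.65, 2.49), 2))  # -> 0.66
```

Below that utilization threshold, on-demand (or spot, for checkpointable jobs) is the cheaper choice.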

Which Provider Should You Use?

  • Best price (spot): RunPod or Vast.ai — H100 from $1.20/hr with checkpointing
  • Best price (on-demand): Hyperstack at $2.29/hr or Lambda Labs at $2.49/hr
  • Best reliability: Lambda Labs — 99.9% SLA, purpose-built AI infrastructure
  • Best for EU data residency: Hyperstack (Iceland/Netherlands) or DataCrunch (Finland)
  • Best for large clusters (32–256 GPUs): CoreWeave — InfiniBand fabric, enterprise SLAs
  • Best for one-off experiments: RunPod — easiest signup, fastest instance provisioning
See live H100 prices across all providers, updated daily: Compare H100 Prices →

Will H100 Prices Drop?

H100 prices have declined roughly 15–25% over the past 12 months as supply from NVIDIA increased and providers expanded capacity. The H200 and Blackwell B200 are entering the market in 2025–2026, which will put further downward pressure on H100 pricing. If your training run is flexible, waiting 3–6 months for lower H100 prices or better B200 availability could save 20–30%.