Lambda Labs
Inference
GPU cloud for AI — inference, fine-tuning, training
Operational
All systems responding normally
Last checked 09/04/2026, 6:41:55 pm
611ms response
Uptime
100.00%
Avg Latency
682ms
P95 Latency
845ms
Fastest
464ms
Checks
150
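The summary stats above (average, P95, fastest) can all be derived from the raw per-check latencies. A minimal sketch, using illustrative sample values rather than the dashboard's actual check data:

```python
# Derive dashboard-style summary stats from raw per-check latencies (ms).
# Sample latencies below are illustrative, not the page's real data.
def summarize(latencies_ms: list[float]) -> dict[str, float]:
    ordered = sorted(latencies_ms)
    # Nearest-rank P95: the sample at or above 95% of all observations.
    rank = max(0, int(round(0.95 * len(ordered))) - 1)
    return {
        "avg": sum(ordered) / len(ordered),
        "p95": ordered[rank],
        "fastest": ordered[0],
        "checks": len(ordered),
    }

stats = summarize([464, 611, 650, 700, 845, 900])
```

With only six samples the nearest-rank P95 lands on the slowest observation; over the page's 150 checks it settles between the average and the maximum, as shown above.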
Response Time (chart of last 60 checks)

💰 Pricing
A100 80GB: $1.99/hr. H100 80GB: $2.49/hr. API from $0.20/M tokens
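Assuming the list prices above are current (check Lambda's pricing page before budgeting), a quick cost estimate is simple arithmetic:

```python
# Cost estimates from the listed prices. Prices are assumptions copied from
# the status page above; verify against Lambda's pricing page before use.
A100_HR = 1.99    # USD per hour, A100 80GB
H100_HR = 2.49    # USD per hour, H100 80GB
API_PER_M = 0.20  # USD per million tokens (entry API price)

def gpu_cost(rate_per_hr: float, hours: float) -> float:
    """Cost of a single-GPU rental for the given number of hours."""
    return round(rate_per_hr * hours, 2)

def api_cost(tokens: int, per_million: float = API_PER_M) -> float:
    """Cost of processing `tokens` tokens at a per-million-token price."""
    return round(tokens / 1_000_000 * per_million, 2)

print(gpu_cost(H100_HR, 8))   # 8-hour H100 fine-tuning run -> 19.92
print(api_cost(5_000_000))    # 5M tokens through the API   -> 1.0
```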
⚡ Rate Limits
GPU cloud; API rate limits depend on the deployed model. No limits are published.
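Because no rate limits are published, a defensive client should retry on HTTP 429 with exponential backoff. A minimal sketch, where `call_api` is a hypothetical stand-in for any request to a deployed model endpoint:

```python
# Exponential backoff with full jitter for unpublished rate limits.
# `call_api` is hypothetical: any callable that raises RateLimitError
# when the server responds with HTTP 429.
import time
import random

class RateLimitError(Exception):
    """Raised by the caller's API wrapper on an HTTP 429 response."""

def with_backoff(call_api, max_retries: int = 5, base_delay: float = 0.5):
    """Retry `call_api` on RateLimitError, doubling the wait each attempt."""
    for attempt in range(max_retries):
        try:
            return call_api()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise
            # Full jitter keeps concurrent clients from retrying in lockstep.
            time.sleep(random.uniform(0, base_delay * 2 ** attempt))
```

Full jitter (a uniform draw up to the doubling cap) spreads retries out, which matters most when many clients hit the same limit at once.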
🤖 Models (1)
| Model | Task | Context | Vision | Tools | JSON |
|---|---|---|---|---|---|
| GPU Cloud (A100/H100 rentals for inference/training) | llm | — | — | — | — |
Other Inference Providers
Groq
LPU inference — fastest tokens per second on the market
Cerebras
Wafer-scale chip inference — 1,000+ tokens/sec
Together AI
Open-source model inference — Llama, Mixtral, FLUX
Fireworks AI
Fast open-model inference — FireFunction, Llama, Mixtral
OpenRouter
Unified API across 200+ models — route by price or speed
Hugging Face
Serverless inference API — 100k+ open models on demand