Baseten
Inference · Production ML model deployment — custom + open models
Operational
All systems responding normally
Last checked 9 Apr 2026, 6:41:55 pm
401ms response
Uptime History: 98.00% uptime (2026-04-05 to Today)
| Metric | Value |
|---|---|
| Uptime | 98.00% |
| Avg Latency | 487ms |
| P95 Latency | 614ms |
| Fastest | 348ms |
| Checks | 150 |
Response Time (last 60 checks): 348ms min · 487ms avg · 2645ms max
💰 Pricing (custom)
A10G: $0.31/hr · A100: $2.87/hr · Dedicated deployment.
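For rough budgeting, those hourly rates translate into monthly figures as sketched below. This is back-of-envelope only: it assumes an always-on dedicated instance at an average 730 hours/month and ignores any autoscaling or idle behavior, which this page doesn't describe.

```python
HOURS_PER_MONTH = 730  # average hours in a month (8,760 / 12)

# $/hr figures taken from the pricing above
gpu_rates = {"A10G": 0.31, "A100": 2.87}

for gpu, rate in gpu_rates.items():
    monthly = rate * HOURS_PER_MONTH
    print(f"{gpu}: ${rate:.2f}/hr -> ${monthly:,.0f}/month always-on")

# A10G: $0.31/hr -> $226/month always-on
# A100: $2.87/hr -> $2,095/month always-on
```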
⚡ Rate Limits (free)
RPM: 10 · TPM: 1,000
Free plan: 10 requests/min, 1,000 tokens/min. Paid plans: up to 1,000 RPM.
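At 10 requests/min, the free tier is easy to trip in any loop. A minimal client-side backoff sketch, assuming the API signals throttling with HTTP 429 and an optional Retry-After header (standard HTTP behavior, not confirmed by this page):

```python
import random
import time

import requests


def post_with_backoff(url, payload, headers, max_retries=5):
    """POST with exponential backoff + jitter on HTTP 429 responses."""
    for attempt in range(max_retries):
        resp = requests.post(url, json=payload, headers=headers, timeout=30)
        if resp.status_code != 429:
            resp.raise_for_status()
            return resp
        # Honor Retry-After if the server sends one; otherwise 1s, 2s, 4s, ...
        delay = float(resp.headers.get("Retry-After", 2 ** attempt))
        time.sleep(delay + random.uniform(0, 0.5))
    raise RuntimeError(f"still rate-limited after {max_retries} retries")
```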
🤖 Models (1)
| Model | Task | Context | Vision | Tools | JSON |
|---|---|---|---|---|---|
| Custom Deployment (deploy any model; A10G/A100 GPUs) | llm | — | — | — | — |
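To exercise a deployment yourself, the call shape looks roughly like the sketch below. It assumes Baseten's model-scoped predict URL and Api-Key auth header; verify against the current docs. MODEL_ID and the request body are placeholders, since the payload schema depends entirely on the model you deployed.

```python
import os

import requests

MODEL_ID = "abcd1234"  # placeholder: your deployment's ID
API_KEY = os.environ["BASETEN_API_KEY"]  # placeholder env var

resp = requests.post(
    f"https://model-{MODEL_ID}.api.baseten.co/production/predict",
    headers={"Authorization": f"Api-Key {API_KEY}"},
    json={"prompt": "Hello from a custom deployment"},  # schema is model-specific
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```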
Recent Checks (showing last 15)

| Status | Response | Checked at |
|---|---|---|
| Operational | 401ms | 9 Apr, 06:41 pm |
| Operational | 382ms | 9 Apr, 05:56 pm |
| Unknown | — | 9 Apr, 05:18 pm |
| Operational | 555ms | 9 Apr, 04:44 pm |
| Operational | 424ms | 9 Apr, 03:57 pm |
| Operational | 530ms | 9 Apr, 03:17 pm |
| Operational | 497ms | 9 Apr, 02:25 pm |
| Operational | 403ms | 9 Apr, 01:29 pm |
| Operational | 459ms | 9 Apr, 12:19 pm |
| Operational | 366ms | 9 Apr, 11:23 am |
| Operational | 461ms | 9 Apr, 10:48 am |
| Operational | 467ms | 9 Apr, 10:01 am |
| Operational | 402ms | 9 Apr, 09:46 am |
| Operational | 586ms | 9 Apr, 09:20 am |
| Operational | 501ms | 9 Apr, 08:58 am |
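Each row above reduces to one timed HTTP round trip. A minimal sketch of such a probe follows; the endpoint actually monitored here isn't shown, so the URL is a placeholder, and a failed request maps to the "Unknown" row with no latency.

```python
import time

import requests


def probe(url, timeout=10.0):
    """Time one request; return (status, latency_ms) like the rows above."""
    start = time.monotonic()
    try:
        resp = requests.get(url, timeout=timeout)
        latency_ms = (time.monotonic() - start) * 1000
        return ("Operational" if resp.ok else "Degraded", round(latency_ms))
    except requests.RequestException:
        return ("Unknown", None)  # the dash in the table above


print(probe("https://status.example.com/health"))  # placeholder URL
```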
Other Inference Providers
Groq
LPU inference — fastest tokens per second on the market
Cerebras
Wafer-scale chip inference — 1,000+ tokens/sec
Together AI
Open-source model inference — Llama, Mixtral, FLUX
Fireworks AI
Fast open-model inference — FireFunction, Llama, Mixtral
OpenRouter
Unified API across 200+ models — route by price or speed
Hugging Face
Serverless inference API — 100k+ open models on demand