Cerebras Inference — FREE TIER
Wafer-scale chip inference — 1,000+ tokens/sec
Operational
All systems responding normally
Last checked 01/05/2026, 8:49:30 pm
301ms response
Uptime History: 98.00% uptime (2026-04-26 to today)
| Metric | Value |
|---|---|
| Uptime | 98.00% |
| Avg Latency | 307ms |
| P95 Latency | 352ms |
| Fastest | 92ms |
| Checks | 150 |
Response Time (last 60 checks): 92ms min · 307ms avg · 7096ms max
💰 Pricing
- llama-3.3-70b (free tier available): $0.60/1M input, $0.60/1M output — 1,000+ tokens/sec
- llama-3.1-8b: $0.10/1M input, $0.10/1M output
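The per-token rates above translate directly into request costs. Below is a minimal sketch of that arithmetic, using the prices listed on this page; the token counts in the example are hypothetical values, not measurements from the service.

```python
# Rough cost estimate at the Cerebras rates listed above (USD per 1M tokens).
PRICES = {
    "llama-3.3-70b": {"input": 0.60, "output": 0.60},
    "llama-3.1-8b": {"input": 0.10, "output": 0.10},
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of a single request."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# e.g. a hypothetical 2,000-token prompt with a 500-token completion:
cost = estimate_cost("llama-3.3-70b", 2_000, 500)
print(f"${cost:.6f}")  # prints $0.001500
```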
⚡ Rate Limits
Free tier: 30 RPM, 60,000 TPM
🤖 Models (1)
| Model | Task | Context | Vision | Tools | JSON |
|---|---|---|---|---|---|
| Llama 3.3 70B (1,000+ tokens/sec sustained) | llm | 128k | — | ✅ | ✅ |
Recent Checks
Showing last 15 checks:

| Status | Latency | Time |
|---|---|---|
| Operational | 301ms | 1 May, 08:49 pm |
| Operational | 265ms | 1 May, 08:11 pm |
| Operational | 177ms | 1 May, 07:36 pm |
| Outage | 1225ms | 1 May, 06:51 pm |
| Outage | 348ms | 1 May, 06:03 pm |
| Operational | 92ms | 1 May, 05:15 pm |
| Operational | 113ms | 1 May, 04:17 pm |
| Operational | 216ms | 1 May, 03:04 pm |
| Operational | 113ms | 1 May, 01:46 pm |
| Operational | 3206ms | 1 May, 12:24 pm |
| Operational | 211ms | 1 May, 11:27 am |
| Unknown | — | 1 May, 10:48 am |
| Operational | 232ms | 1 May, 09:53 am |
| Operational | 312ms | 1 May, 09:29 am |
| Operational | 119ms | 1 May, 09:00 am |
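Headline metrics like the uptime and average latency above can be derived directly from check records such as these. The sketch below does that for the 15 recent checks on this page; it assumes "Unknown" checks are excluded from the uptime denominator (how the dashboard actually treats them is not stated), which is why the result over this short window differs from the 150-check 98.00% figure.

```python
# Statuses and latencies copied from the "Recent Checks" list above;
# the "Unknown" check carries no latency reading.
checks = [
    ("Operational", 301), ("Operational", 265), ("Operational", 177),
    ("Outage", 1225), ("Outage", 348), ("Operational", 92),
    ("Operational", 113), ("Operational", 216), ("Operational", 113),
    ("Operational", 3206), ("Operational", 211), ("Unknown", None),
    ("Operational", 232), ("Operational", 312), ("Operational", 119),
]

# Uptime: share of non-Unknown checks that were Operational (assumption).
rated = [c for c in checks if c[0] != "Unknown"]
uptime = 100 * sum(s == "Operational" for s, _ in rated) / len(rated)

# Latency: mean over checks that reported a reading.
latencies = [ms for _, ms in checks if ms is not None]
avg = sum(latencies) / len(latencies)

print(f"uptime {uptime:.2f}% over {len(rated)} checks, avg {avg:.0f}ms")
# prints: uptime 85.71% over 14 checks, avg 495ms
```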
Other Inference Providers
- Groq: LPU inference — fastest tokens per second on the market
- Together AI: open-source model inference — Llama, Mixtral, FLUX
- Fireworks AI: fast open-model inference — FireFunction, Llama, Mixtral
- OpenRouter: unified API across 200+ models — route by price or speed
- Hugging Face: serverless inference API — 100k+ open models on demand
- fal.ai: ultra-fast image & video model inference for agents