Cerebras Inference (FREE TIER)

Wafer-scale chip inference — 1,000+ tokens/sec

Operational

All systems responding normally

Last checked 11/04/2026, 6:21:41 pm

321ms response

Uptime History (2026-04-07 to today): 98.67% uptime

Uptime: 98.67%
Avg Latency: 387ms
P95 Latency: 496ms
Fastest: 184ms
Checks: 150

Response Time (last 60 checks): 184ms min · 387ms avg · 7135ms max

💰 Pricing

llama-3.3-70b (FREE)
Input: $0.60/1M tokens · Output: $0.60/1M tokens

1,000+ tokens/sec. llama-3.1-8b: $0.10/1M input, $0.10/1M output.

Free tier available
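The per-million-token rates above translate directly into per-request cost. A minimal sketch, assuming the listed rates (the `PRICING` table and `estimate_cost` helper below are illustrative, not part of any Cerebras SDK):

```python
# Hypothetical helper: estimate request cost from the per-1M-token rates
# listed in the pricing section above (USD per 1M tokens).
PRICING = {
    "llama-3.3-70b": {"input": 0.60, "output": 0.60},
    "llama-3.1-8b": {"input": 0.10, "output": 0.10},
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost for a single request."""
    rates = PRICING[model]
    return (input_tokens * rates["input"]
            + output_tokens * rates["output"]) / 1_000_000

# e.g. a 2,000-token prompt with a 500-token completion on llama-3.3-70b
# costs (2000 + 500) * $0.60 / 1M = $0.0015
print(f"${estimate_cost('llama-3.3-70b', 2000, 500):.6f}")
```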

⚡ Rate Limits

Free tier: RPM 30 · TPM 60,000

🤖 Models (1)

Model: Llama 3.3 70B
Task: llm · Context: 128k
Notes: 1,000+ tokens/sec sustained

Recent Checks

Showing last 15
Operational · 321ms · 11 Apr, 06:21 pm
Operational · 358ms · 11 Apr, 05:58 pm
Operational · 271ms · 11 Apr, 05:38 pm
Operational · 291ms · 11 Apr, 05:14 pm
Operational · 281ms · 11 Apr, 04:41 pm
Operational · 358ms · 11 Apr, 04:12 pm
Operational · 320ms · 11 Apr, 03:47 pm
Operational · 325ms · 11 Apr, 03:19 pm
Operational · 346ms · 11 Apr, 02:42 pm
Operational · 275ms · 11 Apr, 01:52 pm
Operational · 213ms · 11 Apr, 12:50 pm
Operational · 374ms · 11 Apr, 11:48 am
Operational · 279ms · 11 Apr, 11:04 am
Operational · 314ms · 11 Apr, 10:33 am
Operational · 382ms · 11 Apr, 09:53 am