Cerebras

Inference (FREE TIER)

Wafer-scale chip inference — 1,000+ tokens/sec

Operational

All systems responding normally

Last checked 06/04/2026, 12:45:20 am

87ms response

Uptime History: 100.00% uptime
2026-04-03 to Today

Uptime

100.00%

Avg Latency

286ms

P95 Latency

369ms

Fastest

87ms

Checks

150

Response Time

Last 60 checks
87ms min · 286ms avg · 982ms max
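The summary figures above (fastest, average, P95) can be recomputed from raw latency samples. A minimal sketch, using only the 15 latencies from the Recent Checks list further down as assumed input, so the results differ from the full 150-check figures; the nearest-rank percentile method is also an assumption about how the dashboard computes P95:

```python
# Latency samples in ms, taken from the Recent Checks list (15 of 150 checks).
latencies = [87, 220, 223, 181, 175, 351, 175, 323, 223, 286,
             298, 330, 252, 188, 359]

def p95(samples):
    """Nearest-rank 95th percentile (assumed method, not confirmed)."""
    s = sorted(samples)
    k = max(0, round(0.95 * len(s)) - 1)
    return s[k]

fastest = min(latencies)
average = sum(latencies) / len(latencies)
print(f"fastest={fastest}ms avg={average:.0f}ms p95={p95(latencies)}ms")
```

On this 15-sample subset the average and P95 come out lower than the dashboard's 286ms/369ms, which are computed over all 150 checks.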

💰 Pricing

llama-3.3-70b (FREE)
Input: $0.60/1M · Output: $0.60/1M

1,000+ tokens/sec. llama-3.1-8b: $0.10/1M input, $0.10/1M output.

Free tier available

⚡ Rate Limits

free
RPM: 30 · TPM: 60,000

🤖 Models (1)

Model                                        Task   Context   Vision   Tools   JSON
Llama 3.3 70B (1000+ tokens/sec sustained)   llm    128k

Recent Checks

Showing last 15
Operational · 87ms · 6 Apr, 12:45 am
Operational · 220ms · 6 Apr, 12:23 am
Operational · 223ms · 6 Apr, 12:03 am
Operational · 181ms · 5 Apr, 11:42 pm
Operational · 175ms · 5 Apr, 11:18 pm
Operational · 351ms · 5 Apr, 10:48 pm
Operational · 175ms · 5 Apr, 10:23 pm
Operational · 323ms · 5 Apr, 10:00 pm
Operational · 223ms · 5 Apr, 09:49 pm
Operational · 286ms · 5 Apr, 09:31 pm
Operational · 298ms · 5 Apr, 09:14 pm
Operational · 330ms · 5 Apr, 08:56 pm
Operational · 252ms · 5 Apr, 08:38 pm
Operational · 188ms · 5 Apr, 08:20 pm
Operational · 359ms · 5 Apr, 08:03 pm
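The headline uptime figure follows directly from check records like these. A minimal sketch, assuming uptime is simply the share of checks whose status is "Operational" (the status strings and formula are assumptions, and only a few of the 150 checks are used as sample data):

```python
# (status, latency_ms) pairs mirroring the first few Recent Checks entries.
checks = [
    ("Operational", 87), ("Operational", 220), ("Operational", 223),
    ("Operational", 181), ("Operational", 175),
]

def uptime_pct(checks):
    """Percentage of checks that reported an operational status."""
    up = sum(1 for status, _ in checks if status == "Operational")
    return 100.0 * up / len(checks)

print(f"{uptime_pct(checks):.2f}% uptime")  # all operational -> 100.00% uptime
```

With every one of the 150 checks in the window operational, this yields the 100.00% shown at the top of the page.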