Anthropic
LLM · Free tier
Claude — safety-first reasoning and long context
Operational
All systems responding normally
Last checked 11 Apr 2026, 6:21:41 pm
784ms response
Uptime History: 94.67% uptime (2026-04-07 to today)

| Uptime | Avg Latency | P95 Latency | Fastest | Checks |
|---|---|---|---|---|
| 94.67% | 667ms | 784ms | 216ms | 150 |
Response Time (last 60 checks): 216ms min · 667ms avg · 7411ms max
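The headline figures (uptime percentage, average, P95, fastest) can be reproduced from raw check results. A minimal sketch, assuming a nearest-rank percentile; the page's actual windowing and percentile method isn't stated, and `summarize` is a hypothetical helper, not part of any monitoring API:

```python
import math

def summarize(checks):
    """checks: list of (ok, latency_ms) tuples from individual probes.
    Returns (uptime_pct, avg_ms, p95_ms, fastest_ms).
    The nearest-rank percentile here is an assumption; the dashboard's
    exact method is not specified."""
    lat = sorted(ms for _, ms in checks)
    up = sum(1 for ok, _ in checks if ok)
    uptime_pct = 100.0 * up / len(checks)
    avg_ms = sum(lat) / len(lat)
    # nearest-rank 95th percentile: the ceil(0.95 * n)-th smallest value
    p95_ms = lat[math.ceil(0.95 * len(lat)) - 1]
    return uptime_pct, avg_ms, p95_ms, lat[0]
```

For example, 150 checks with 8 failures yields 142/150 ≈ 94.67%, matching the uptime shown above.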
💰 Pricing

| Model | Input | Output |
|---|---|---|
| claude-sonnet-4-5 | $3 / 1M tokens | $15 / 1M tokens |
| claude-haiku-3-5 | $0.80 / 1M tokens | $4.00 / 1M tokens |
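Given the per-million-token prices above, the cost of a request can be estimated up front. A minimal sketch; the `PRICES` dict and `estimate_cost` helper are hypothetical conveniences, with values copied from the pricing table:

```python
# USD per 1M tokens, taken from the pricing table above.
PRICES = {
    "claude-sonnet-4-5": {"input": 3.00, "output": 15.00},
    "claude-haiku-3-5": {"input": 0.80, "output": 4.00},
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost for one request."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# e.g. a 10k-input / 2k-output Sonnet request:
# (10_000 * 3 + 2_000 * 15) / 1_000_000 = $0.06
```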
⚡ Rate Limits

| Tier | RPM | TPM | Concurrent | Unlocked at |
|---|---|---|---|---|
| free | 5 | 25,000 | 5 | n/a |
| tier1 | 50 | 50,000 | 5 | $5 spend |
| tier4 | 4,000 | 400,000 | n/a | $1,000 spend |
🤖 Models (3)
| Model | Task | Context | Vision | Tools | JSON | Notes |
|---|---|---|---|---|---|---|
| Claude Sonnet 4.5 | llm | 200k | ✅ | ✅ | ✅ | |
| Claude Opus 4.5 | llm | 200k | ✅ | ✅ | ✅ | Most capable Anthropic model; $15/$75 per 1M |
| Claude Haiku 3.5 | llm | 200k | ✅ | ✅ | ✅ | Fastest/cheapest Claude |
Recent Checks
Showing last 15:

| Status | Latency | Checked |
|---|---|---|
| Operational | 784ms | 11 Apr, 06:21 pm |
| Operational | 630ms | 11 Apr, 05:58 pm |
| Operational | 475ms | 11 Apr, 05:38 pm |
| Operational | 388ms | 11 Apr, 05:14 pm |
| Operational | 629ms | 11 Apr, 04:41 pm |
| Operational | 652ms | 11 Apr, 04:12 pm |
| Operational | 680ms | 11 Apr, 03:47 pm |
| Operational | 585ms | 11 Apr, 03:19 pm |
| Operational | 665ms | 11 Apr, 02:42 pm |
| Operational | 702ms | 11 Apr, 01:52 pm |
| Operational | 722ms | 11 Apr, 12:50 pm |
| Operational | 456ms | 11 Apr, 11:48 am |
| Operational | 614ms | 11 Apr, 11:04 am |
| Operational | 585ms | 11 Apr, 10:33 am |
| Operational | 689ms | 11 Apr, 09:53 am |
Other LLM Providers
OpenAI
GPT-4o, o3, o1 — the benchmark everyone chases
DeepSeek
DeepSeek V3, R1 — elite reasoning at a fraction of the cost
Mistral AI
Mistral Large, Codestral — European frontier models
xAI Grok
Grok-3 — real-time web access, strong reasoning
AWS Bedrock
Managed foundation models — Claude, Llama, Titan on AWS
Azure OpenAI
OpenAI models on Microsoft Azure — enterprise SLA