Anthropic
LLM · FREE TIER · Claude — safety-first reasoning and long context
Operational
All systems responding normally
Last checked 01/05/2026, 8:49:30 pm
474ms response
Uptime History: 97.33% uptime
2026-04-26 to Today
Uptime
97.33%
Avg Latency
600ms
P95 Latency
713ms
Fastest
336ms
Checks
150
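The headline uptime figure follows directly from the check counts: 97.33% is consistent with 146 passing checks out of the 150 above (the 146 is inferred arithmetic, not a number shown on this page). A minimal sketch:

```python
# Sketch: uptime as the share of passing checks, rounded to two
# decimals as the dashboard displays it. 146/150 is inferred from
# the 97.33% and 150-check figures above, not reported directly.
def uptime_pct(passed: int, total: int) -> float:
    return round(100 * passed / total, 2)

uptime = uptime_pct(146, 150)  # 97.33
```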
Response Time
Last 60 checks: 336ms min · 600ms avg · 5528ms max
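The summary stats above (min/avg/P95/max) can be reproduced from raw check latencies. A sketch, using the nearest-rank method for P95 (an assumption — the dashboard does not state which percentile method it uses) and illustrative sample values rather than the real 60 checks:

```python
# Sketch: deriving the dashboard's latency summary from raw samples.
# Sample values below are illustrative, not the actual 60 checks.
def latency_summary(samples_ms: list[int]) -> dict[str, float]:
    ordered = sorted(samples_ms)
    n = len(ordered)
    p95_index = min(n - 1, int(n * 0.95))  # nearest-rank P95 (assumed method)
    return {
        "min": ordered[0],
        "avg": sum(ordered) / n,
        "p95": ordered[p95_index],
        "max": ordered[-1],
    }

stats = latency_summary([336, 474, 600, 662, 5528])
```

A single 5528ms outlier like the one above is why the max sits far from the P95 and average.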
💰 Pricing
claude-sonnet-4-5
Input: $3/1M · Output: $15/1M
claude-haiku-3-5: $0.80/$4.00 per 1M (input/output)
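The per-million-token prices above translate into request cost as (input_tokens × input_price + output_tokens × output_price) / 1,000,000. A sketch using the prices listed on this page (verify against Anthropic's current pricing before relying on them):

```python
# Sketch: estimating request cost from per-million-token prices.
# Prices copied from this page; check Anthropic's pricing page
# before using these numbers for billing estimates.
PRICES_PER_MTOK = {
    "claude-sonnet-4-5": {"input": 3.00, "output": 15.00},
    "claude-haiku-3-5": {"input": 0.80, "output": 4.00},
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    p = PRICES_PER_MTOK[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# 10k input tokens ($0.03) + 2k output tokens ($0.03) = $0.06
cost = estimate_cost("claude-sonnet-4-5", 10_000, 2_000)
```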
⚡ Rate Limits
free
RPM: 5 · TPM: 25,000 · Concurrent: 5
tier1
RPM: 50 · TPM: 50,000 · Concurrent: 5
After $5 spend.
tier4
RPM: 4,000 · TPM: 400,000
After $1000 spend.
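On the free tier's 5 RPM cap, a client-side limiter avoids burning requests on 429 responses. A minimal sliding-window sketch that guards only the RPM cap (TPM and concurrency limits would need analogous tracking — this is an illustration, not a complete client):

```python
import time
from collections import deque

# Sketch: client-side sliding-window limiter for an RPM cap, e.g.
# the free tier's 5 requests/minute listed above. Only RPM is
# enforced here; TPM and concurrency caps are out of scope.
class RpmLimiter:
    def __init__(self, rpm: int):
        self.rpm = rpm
        self.sent: deque[float] = deque()  # timestamps of recent requests

    def acquire(self) -> None:
        now = time.monotonic()
        # Drop timestamps older than the 60-second window.
        while self.sent and now - self.sent[0] >= 60:
            self.sent.popleft()
        # If the window is full, wait until the oldest request expires.
        if len(self.sent) >= self.rpm:
            time.sleep(60 - (now - self.sent[0]))
        self.sent.append(time.monotonic())

limiter = RpmLimiter(rpm=5)  # free-tier cap from this page
```

Call `limiter.acquire()` before each API request; the first `rpm` calls pass through immediately and later calls block until the window frees up.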
🤖 Models (3)
| Model | Task | Context | Vision | Tools | JSON |
|---|---|---|---|---|---|
| Claude Sonnet 4.5 | llm | 200k | ✅ | ✅ | ✅ |
| Claude Opus 4.5 (most capable Anthropic model; $15/$75 per 1M) | llm | 200k | ✅ | ✅ | ✅ |
| Claude Haiku 3.5 (fastest/cheapest Claude) | llm | 200k | ✅ | ✅ | ✅ |
Recent Checks
Showing last 15
Operational · 474ms · 1 May, 08:49 pm
Operational · 662ms · 1 May, 08:11 pm
Operational · 692ms · 1 May, 07:36 pm
Operational · 584ms · 1 May, 06:51 pm
Operational · 645ms · 1 May, 06:03 pm
Operational · 359ms · 1 May, 05:15 pm
Operational · 612ms · 1 May, 04:17 pm
Operational · 524ms · 1 May, 03:04 pm
Operational · 381ms · 1 May, 01:46 pm
Operational · 511ms · 1 May, 12:24 pm
Operational · 473ms · 1 May, 11:27 am
Operational · 605ms · 1 May, 10:48 am
Operational · 461ms · 1 May, 09:53 am
Operational · 658ms · 1 May, 09:29 am
Operational · 585ms · 1 May, 09:00 am
Other LLM Providers
OpenAI
GPT-4o, o3, o1 — the benchmark everyone chases
DeepSeek
DeepSeek V3, R1 — elite reasoning at fraction of cost
Mistral AI
Mistral Large, Codestral — European frontier models
xAI Grok
Grok-3 — real-time web access, strong reasoning
AWS Bedrock
Managed foundation models — Claude, Llama, Titan on AWS
Azure OpenAI
OpenAI models on Microsoft Azure — enterprise SLA