Mistral AI
LLM · FREE TIER
Mistral Large, Codestral — European frontier models
Operational
All systems responding normally
Last checked 25/04/2026, 5:17:04 pm
306ms response
Uptime History (2026-04-21 → Today)

Uptime: 100.00%
Avg Latency: 309ms
P95 Latency: 381ms
Fastest: 176ms
Checks: 150
Response Time (last 60 checks): 176ms min · 309ms avg · 514ms max
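The avg and P95 figures above are computed over larger windows (60 and 150 checks), but the method can be sketched from the 15 recent checks listed on this page. A minimal nearest-rank percentile, assuming that is the statistic the dashboard uses (it may use interpolation instead):

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile: smallest sample covering p% of the data."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(p / 100 * len(ordered)))
    return ordered[rank - 1]

# Latencies (ms) from the 15 recent checks on this page.
latencies = [306, 281, 323, 245, 298, 316, 292, 319, 318, 374,
             317, 315, 320, 317, 310]
avg_ms = sum(latencies) / len(latencies)
p95_ms = percentile(latencies, 95)
```

Over only these 15 samples the numbers differ slightly from the dashboard's 60-check window, as expected.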
💰 Pricing

mistral-large-2: Input $2/1M tokens · Output $6/1M tokens
mistral-small: Input $0.10/1M tokens · Output $0.30/1M tokens
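The per-million-token rates above translate into per-request cost with a small helper; a sketch using only the two models priced on this page:

```python
# USD per 1M tokens, taken from the pricing table above.
PRICES = {
    "mistral-large-2": {"input": 2.00, "output": 6.00},
    "mistral-small": {"input": 0.10, "output": 0.30},
}

def request_cost(model, input_tokens, output_tokens):
    """Estimated USD cost of one request at the listed rates."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000
```

For example, a 10,000-token prompt with a 2,000-token completion on mistral-large-2 costs $0.02 + $0.012 = $0.032.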
⚡ Rate Limits

free: RPM 1 · TPM 500,000. Experimental access only. Very strict.
premier: RPM 60. Paid plan. Higher limits on request.
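With the free tier capped at 1 request per minute, client code needs to pace itself. A minimal client-side throttle, sketched below; this is a local pacer only, and the server's own limiter (and any Retry-After headers it sends) remains authoritative:

```python
import time

class RpmThrottle:
    """Spaces calls at least 60/rpm seconds apart (e.g. 60s on the free tier)."""

    def __init__(self, rpm):
        self.min_interval = 60.0 / rpm
        self.last_call = None  # monotonic timestamp of the previous call

    def wait(self, now=None, sleep=time.sleep):
        """Block until the next call is allowed, then record it.

        `now` and `sleep` are injectable for testing.
        """
        now = time.monotonic() if now is None else now
        if self.last_call is not None:
            remaining = self.min_interval - (now - self.last_call)
            if remaining > 0:
                sleep(remaining)
                now += remaining
        self.last_call = now
```

Usage: call `throttle.wait()` immediately before each API request; the first call passes through, later calls sleep out the remainder of the interval.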
🤖 Models (3)

| Model | Task | Context | Vision | Tools | JSON | Notes |
|---|---|---|---|---|---|---|
| Mistral Large 2 | llm | 131k | — | ✅ | ✅ | |
| Codestral | code | 256k | — | ✅ | ✅ | Code specialist; best context window for code tasks |
| Mistral Small | llm | 33k | — | ✅ | ✅ | $0.10/$0.30 per 1M tokens |
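All three models in the table advertise tool and JSON-mode support. A sketch of assembling a chat request that opts into JSON mode; the endpoint and field names follow Mistral's public chat-completions API, so verify them against the current docs before relying on this:

```python
import json

def build_chat_request(model, prompt, want_json=False):
    """Build a chat-completions request body (field names assumed from
    Mistral's public API; check the official reference before use)."""
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    if want_json:
        # JSON mode: the models above all list JSON support in the table.
        body["response_format"] = {"type": "json_object"}
    return json.dumps(body)

# POST the result to https://api.mistral.ai/v1/chat/completions
# with an Authorization: Bearer <API key> header.
```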
Recent Checks (showing last 15)

| Status | Latency | Time |
|---|---|---|
| Operational | 306ms | 25 Apr, 05:17 pm |
| Operational | 281ms | 25 Apr, 04:45 pm |
| Operational | 323ms | 25 Apr, 04:07 pm |
| Operational | 245ms | 25 Apr, 03:29 pm |
| Operational | 298ms | 25 Apr, 02:45 pm |
| Operational | 316ms | 25 Apr, 01:50 pm |
| Operational | 292ms | 25 Apr, 12:46 pm |
| Operational | 319ms | 25 Apr, 11:42 am |
| Operational | 318ms | 25 Apr, 10:58 am |
| Operational | 374ms | 25 Apr, 10:04 am |
| Operational | 317ms | 25 Apr, 09:46 am |
| Operational | 315ms | 25 Apr, 09:25 am |
| Operational | 320ms | 25 Apr, 09:04 am |
| Operational | 317ms | 25 Apr, 08:46 am |
| Operational | 310ms | 25 Apr, 08:22 am |
Other LLM Providers
OpenAI
GPT-4o, o3, o1 — the benchmark everyone chases
Anthropic
Claude — safety-first reasoning and long context
DeepSeek
DeepSeek V3, R1 — elite reasoning at a fraction of the cost
xAI Grok
Grok-3 — real-time web access, strong reasoning
AWS Bedrock
Managed foundation models — Claude, Llama, Titan on AWS
Azure OpenAI
OpenAI models on Microsoft Azure — enterprise SLA