Mistral AI
LLM · FREE TIER · Mistral Large, Codestral — European frontier models
Operational
All systems responding normally
Last checked 16/05/2026, 2:15:23 am
272ms response
Uptime History: 100.00% uptime (2026-05-10 to Today)
Uptime: 100.00%
Avg Latency: 288ms
P95 Latency: 343ms
Fastest: 159ms
Checks: 150
Response Time
Last 60 checks: 159ms min · 288ms avg · 403ms max
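The avg and P95 figures above are derived from raw per-check latencies. A minimal sketch of how such stats fall out, using only the 15 recent-check samples listed further down (not the full 150 checks, so the numbers differ from the dashboard's):

```python
def percentile(samples, pct):
    """Nearest-rank percentile of a list of latencies (ms)."""
    ranked = sorted(samples)
    k = max(0, round(pct / 100 * len(ranked)) - 1)
    return ranked[k]

# The 15 recent checks shown on this page, in ms
latencies = [272, 325, 307, 253, 316, 284, 270, 303,
             271, 266, 295, 277, 296, 221, 256]

avg = sum(latencies) / len(latencies)
p95 = percentile(latencies, 95)
print(f"avg={avg:.0f}ms p95={p95}ms min={min(latencies)}ms")
```

Nearest-rank is one of several common P95 definitions; interpolating variants give slightly different values on small samples.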
💰 Pricing
mistral-large-2: input $2 / 1M tokens · output $6 / 1M tokens
mistral-small: input $0.10 / 1M tokens · output $0.30 / 1M tokens
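At these rates, per-request cost is a linear function of token counts. A quick sketch (prices copied from the table above, per 1M tokens; the helper is illustrative, not part of any SDK):

```python
# USD per 1M tokens, from the pricing above
PRICES = {
    "mistral-large-2": {"input": 2.00, "output": 6.00},
    "mistral-small":   {"input": 0.10, "output": 0.30},
}

def cost_usd(model, input_tokens, output_tokens):
    """Estimated request cost in USD for the listed models."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# e.g. a 10k-in / 2k-out call on mistral-large-2:
print(cost_usd("mistral-large-2", 10_000, 2_000))  # 0.032
```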
⚡ Rate Limits
free
RPM: 1 · TPM: 500,000
Experimental access only. Very strict.
premier
RPM: 60
Paid plan. Higher limits available on request.
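With the free tier capped at 1 request per minute, a client has to pace itself or it will be rejected. A minimal client-side throttle sketch (the 60-second interval follows directly from RPM: 1; this is a generic pattern, not the official SDK's behavior):

```python
import time

class Throttle:
    """Block so consecutive calls are at least 60/rpm seconds apart."""

    def __init__(self, rpm):
        self.interval = 60.0 / rpm
        self.last = None  # monotonic timestamp of the previous call

    def wait(self, now=None, sleep=time.sleep):
        """Sleep until the next request is allowed; return the send time."""
        now = time.monotonic() if now is None else now
        if self.last is not None:
            remaining = self.interval - (now - self.last)
            if remaining > 0:
                sleep(remaining)
                now += remaining
        self.last = now
        return now

free_tier = Throttle(rpm=1)  # call free_tier.wait() before each request
```

Passing `now` and `sleep` explicitly keeps the class testable; in production you would call `free_tier.wait()` with no arguments before each API request.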
🤖 Models (3)
| Model | Task | Context | Vision | Tools | JSON |
|---|---|---|---|---|---|
| Mistral Large 2 | llm | 131k | — | ✅ | ✅ |
| Codestral (code specialist; best context for code tasks) | code | 256k | — | ✅ | ✅ |
| Mistral Small ($0.10/$0.30 per 1M tokens) | llm | 33k | — | ✅ | ✅ |
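The context column is what matters for routing. A sketch of picking the first listed model for a task whose window fits the prompt (context sizes copied from the table above; token counts and the reply budget are illustrative assumptions):

```python
# (name, task, context window in tokens), from the models table,
# ordered smallest-context first so cheaper/smaller models win ties
MODELS = [
    ("Mistral Small", "llm", 33_000),
    ("Mistral Large 2", "llm", 131_000),
    ("Codestral", "code", 256_000),
]

def pick_model(task, prompt_tokens, reply_budget=1_000):
    """First model matching the task whose context fits prompt + reply."""
    for name, t, ctx in MODELS:
        if t == task and prompt_tokens + reply_budget <= ctx:
            return name
    raise ValueError("no listed model fits this prompt")

print(pick_model("llm", 50_000))  # Mistral Large 2: 51k > Small's 33k window
```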
Recent Checks
Showing last 15:
- Operational · 272ms · 16 May, 02:15 am
- Operational · 325ms · 16 May, 01:28 am
- Operational · 307ms · 16 May, 12:37 am
- Operational · 253ms · 15 May, 11:42 pm
- Operational · 316ms · 15 May, 11:00 pm
- Operational · 284ms · 15 May, 10:13 pm
- Operational · 270ms · 15 May, 09:28 pm
- Operational · 303ms · 15 May, 08:31 pm
- Operational · 271ms · 15 May, 07:32 pm
- Operational · 266ms · 15 May, 06:26 pm
- Operational · 295ms · 15 May, 05:24 pm
- Operational · 277ms · 15 May, 04:07 pm
- Operational · 296ms · 15 May, 02:47 pm
- Operational · 221ms · 15 May, 01:22 pm
- Operational · 256ms · 15 May, 12:04 pm
Other LLM Providers
OpenAI
GPT-4o, o3, o1 — the benchmark everyone chases
Anthropic
Claude — safety-first reasoning and long context
DeepSeek
DeepSeek V3, R1 — elite reasoning at a fraction of the cost
xAI Grok
Grok-3 — real-time web access, strong reasoning
AWS Bedrock
Managed foundation models — Claude, Llama, Titan on AWS
Azure OpenAI
OpenAI models on Microsoft Azure — enterprise SLA