Mistral AI
LLM · Free tier · Mistral Large, Codestral — European frontier models
Operational
All systems responding normally
Last checked 05/04/2026, 9:50:13 am
324ms response
Uptime History: 99.33% uptime (2026-04-02 to today)

Uptime: 99.33%
Avg Latency: 287ms
P95 Latency: 406ms
Fastest: 153ms
Checks: 150
Response Time (last 60 checks): 153ms min · 287ms avg · 5100ms max
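The min/avg/P95 summary figures above can be reproduced from raw check latencies. A minimal sketch using the nearest-rank percentile convention; the sample values are illustrative, not the dashboard's actual 60 checks:

```python
# Summarize a list of check latencies (in ms) into min/avg/p95/max.
# Sample values below are illustrative, not the monitor's raw data.
def latency_summary(samples_ms):
    ordered = sorted(samples_ms)
    n = len(ordered)
    # Nearest-rank P95: the smallest value covering 95% of samples.
    p95_index = max(0, int(0.95 * n + 0.5) - 1)
    return {
        "min": ordered[0],
        "avg": sum(ordered) / n,
        "p95": ordered[p95_index],
        "max": ordered[-1],
    }

stats = latency_summary([153, 182, 246, 284, 287, 290, 314, 322, 324, 426])
```

Dashboards differ in which percentile convention they use (nearest-rank vs. linear interpolation), so small discrepancies against the displayed P95 are expected.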
💰 Pricing
mistral-large-2 — Input: $2 / 1M tokens · Output: $6 / 1M tokens
mistral-small — Input: $0.10 / 1M tokens · Output: $0.30 / 1M tokens
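At these per-million-token prices, a request's cost is straightforward arithmetic. A small sketch using the rates listed above (the model keys here mirror the pricing labels, not necessarily the API's exact model IDs):

```python
# USD per 1M tokens, (input, output), from the pricing section above.
PRICES = {
    "mistral-large-2": (2.00, 6.00),
    "mistral-small": (0.10, 0.30),
}

def estimate_cost(model, input_tokens, output_tokens):
    """Return the estimated USD cost for one request."""
    in_price, out_price = PRICES[model]
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

# Example: 50k input + 10k output tokens on mistral-large-2.
cost = estimate_cost("mistral-large-2", 50_000, 10_000)  # → 0.16 USD
```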
⚡ Rate Limits
Free — RPM: 1 · TPM: 500,000. Experimental access only; very strict.
Premier — RPM: 60. Paid plan; higher limits on request.
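With the free tier limited to 1 request per minute, client-side pacing is effectively mandatory. A minimal sketch of a blocking rate limiter; how you wire it into your request code is up to you:

```python
import time

# Simple client-side pacer for a requests-per-minute budget.
# Call wait() before each API request; it blocks until a slot is free.
class RateLimiter:
    def __init__(self, requests_per_minute):
        self.min_interval = 60.0 / requests_per_minute
        self.last_call = 0.0  # monotonic timestamp of the previous request

    def wait(self):
        elapsed = time.monotonic() - self.last_call
        if elapsed < self.min_interval:
            time.sleep(self.min_interval - elapsed)
        self.last_call = time.monotonic()

limiter = RateLimiter(requests_per_minute=1)  # free-tier budget
# limiter.wait()  # then issue the request
```

This only prevents you from exceeding the budget proactively; a robust client would also back off on HTTP 429 responses from the server.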
🤖 Models (3)
| Model | Task | Context | Vision | Tools | JSON | Notes |
|---|---|---|---|---|---|---|
| Mistral Large 2 | llm | 131k | — | ✅ | ✅ | |
| Codestral | code | 256k | — | ✅ | ✅ | Code specialist; best context for code tasks |
| Mistral Small | llm | 33k | — | ✅ | ✅ | $0.10 in / $0.30 out per 1M tokens |
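All three models are served through Mistral's chat completions endpoint. A sketch of building such a request with only the standard library; the endpoint path and payload shape follow Mistral's OpenAI-style API, and the model ID below mirrors the pricing label — verify both against the official docs before use:

```python
import json
import urllib.request

# Assumed OpenAI-style chat endpoint; confirm against Mistral's API docs.
API_URL = "https://api.mistral.ai/v1/chat/completions"

def build_request(api_key, model, prompt):
    """Build (but do not send) a chat completion request."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request("YOUR_API_KEY", "mistral-large-2", "Hello")
# urllib.request.urlopen(req)  # uncomment to actually send the request
```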
Recent Checks (showing last 15)

| Time (5 Apr) | Status | Response |
|---|---|---|
| 09:50 am | Operational | 324ms |
| 09:32 am | Operational | 182ms |
| 09:10 am | Operational | 426ms |
| 08:52 am | Operational | 246ms |
| 08:37 am | Operational | 314ms |
| 08:19 am | Operational | 290ms |
| 08:01 am | Operational | 285ms |
| 07:49 am | Operational | 284ms |
| 07:32 am | Operational | 322ms |
| 07:10 am | Operational | 281ms |
| 06:53 am | Operational | 332ms |
| 06:37 am | Operational | 439ms |
| 06:19 am | Operational | 315ms |
| 06:00 am | Operational | 341ms |
| 05:49 am | Operational | 288ms |
Other LLM Providers
- OpenAI: GPT-4o, o3, o1 — the benchmark everyone chases
- Anthropic: Claude — safety-first reasoning and long context
- DeepSeek: DeepSeek V3, R1 — elite reasoning at a fraction of the cost
- xAI Grok: Grok-3 — real-time web access, strong reasoning
- AWS Bedrock: managed foundation models — Claude, Llama, Titan on AWS
- Azure OpenAI: OpenAI models on Microsoft Azure — enterprise SLA