Scaleway AI Inference
European cloud AI inference — GDPR-compliant, open models
Degraded
Elevated response times or partial issues
Last checked 6 Apr 2026, 12:45:20 am
555ms response
Uptime History: 0.00% uptime
2026-04-03 to Today
| Uptime | Avg Latency | P95 Latency | Fastest | Checks |
|---|---|---|---|---|
| 0.00% | 590ms | 737ms | 235ms | 150 |
Response Time
Last 60 checks: 235ms min, 590ms avg, 4844ms max
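For readers wondering how a P95 figure like the one above is derived: it is the smallest latency that is at least as large as 95% of the samples. A minimal nearest-rank sketch, using the 15 recent-check latencies shown further down this page (the dashboard's 737ms figure is computed over all 150 checks, a different sample):

```python
def p95(samples):
    """95th-percentile latency by the nearest-rank method:
    the smallest value >= 95% of the samples."""
    ordered = sorted(samples)
    # nearest-rank index: ceil(0.95 * n), converted to 0-based
    k = max(0, -(-95 * len(ordered) // 100) - 1)
    return ordered[k]

# The 15 latencies from the "Recent Checks" list below, in ms
latencies_ms = [555, 839, 594, 507, 809, 518, 470, 543,
                445, 690, 592, 490, 653, 600, 576]
print(p95(latencies_ms))  # → 839
```

With only 15 samples the nearest-rank P95 lands on the maximum, which is why small windows make percentile figures jumpy.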
💰 Pricing
llama-3.3-70b
Input: $0.25/1M, Output: $0.25/1M
EU-based inference. Prices vary by region.
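At a flat $0.25 per million tokens for both input and output, estimating a request's cost is simple arithmetic. A sketch using the list prices above (per-region variations noted on this page are not modeled):

```python
# llama-3.3-70b list prices from the table above (USD per 1M tokens)
INPUT_PER_M = 0.25
OUTPUT_PER_M = 0.25

def cost_usd(input_tokens, output_tokens):
    """Estimated cost of one request at the listed flat rate."""
    return (input_tokens / 1e6) * INPUT_PER_M + (output_tokens / 1e6) * OUTPUT_PER_M

# Example: 4,000 prompt tokens and 1,000 completion tokens
print(f"${cost_usd(4_000, 1_000):.6f}")  # → $0.001250
```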
⚡ Rate Limits
standard
RPM: 100, TPM: 100,000
EU data residency. Pay-as-you-go. Limits per region.
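One way to stay under the 100 RPM ceiling is a client-side sliding-window limiter. A minimal sketch (illustrative only; how Scaleway enforces the limit server-side, and whether it counts per key or per region, is not stated on this page):

```python
import time
from collections import deque

class RpmLimiter:
    """Client-side sliding-window limiter for a requests-per-minute quota.
    Call acquire() before each API request; it blocks when the window is full."""

    def __init__(self, rpm=100):
        self.rpm = rpm
        self.sent = deque()  # monotonic timestamps of requests in the last 60s

    def acquire(self):
        now = time.monotonic()
        # Drop timestamps that have aged out of the 60-second window
        while self.sent and now - self.sent[0] >= 60:
            self.sent.popleft()
        if len(self.sent) >= self.rpm:
            # Wait until the oldest request in the window expires, then retry
            time.sleep(60 - (now - self.sent[0]))
            return self.acquire()
        self.sent.append(time.monotonic())

limiter = RpmLimiter(rpm=100)
limiter.acquire()  # would precede each inference request
```

Note this does not cover the 100,000 TPM cap; a production client would also track token usage per window.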
🤖 Models (1)
| Model | Task | Context | Vision | Tools | JSON |
|---|---|---|---|---|---|
| Llama 3.3 70B (European cloud AI, GDPR-compliant inference) | llm | 128k | — | ✅ | ✅ |
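The table marks the model as supporting tool calling and JSON output. A sketch of what a JSON-mode request payload might look like, assuming Scaleway exposes an OpenAI-style chat-completions API (the endpoint, field names, and `response_format` convention are assumptions, not confirmed by this page):

```python
def build_request(prompt, model="llama-3.3-70b"):
    """Build a hypothetical OpenAI-style chat-completions payload.
    Field names are assumed, not taken from Scaleway documentation."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "response_format": {"type": "json_object"},  # JSON mode, per the table
        "max_tokens": 512,
    }

payload = build_request("List three EU data-residency requirements as JSON.")
print(payload["model"])  # → llama-3.3-70b
```

Check the provider's own API reference before relying on any of these field names.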
Recent Checks
Showing last 15

| Status | Latency | Time |
|---|---|---|
| Degraded | 555ms | 6 Apr, 12:45 am |
| Degraded | 839ms | 6 Apr, 12:23 am |
| Degraded | 594ms | 6 Apr, 12:03 am |
| Degraded | 507ms | 5 Apr, 11:42 pm |
| Degraded | 809ms | 5 Apr, 11:18 pm |
| Degraded | 518ms | 5 Apr, 10:48 pm |
| Degraded | 470ms | 5 Apr, 10:23 pm |
| Degraded | 543ms | 5 Apr, 10:00 pm |
| Degraded | 445ms | 5 Apr, 09:49 pm |
| Degraded | 690ms | 5 Apr, 09:31 pm |
| Degraded | 592ms | 5 Apr, 09:14 pm |
| Degraded | 490ms | 5 Apr, 08:56 pm |
| Degraded | 653ms | 5 Apr, 08:38 pm |
| Degraded | 600ms | 5 Apr, 08:20 pm |
| Degraded | 576ms | 5 Apr, 08:03 pm |
Other Inference Providers
- Groq — LPU inference, fastest tokens per second on the market
- Cerebras — wafer-scale chip inference, 1,000+ tokens/sec
- Together AI — open-source model inference: Llama, Mixtral, FLUX
- Fireworks AI — fast open-model inference: FireFunction, Llama, Mixtral
- OpenRouter — unified API across 200+ models, route by price or speed
- Hugging Face — serverless inference API, 100k+ open models on demand