OpenRouter
Inference · FREE TIER
Unified API across 200+ models — route by price or speed
Operational
All systems responding normally
Last checked 05/04/2026, 9:50:13 am
278ms response
Uptime: 94.00%
Avg Latency: 257ms
P95 Latency: 484ms
Fastest: 124ms
Checks: 150
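The latency figures above (average, P95, fastest) can be reproduced from raw per-check response times. A minimal sketch in Python, using hypothetical sample data rather than the dashboard's real checks:

```python
def summarize_latencies(samples_ms):
    """Summarize a list of response times (ms) the way the dashboard does."""
    ordered = sorted(samples_ms)
    avg = sum(ordered) / len(ordered)
    # Nearest-rank P95: the value below which ~95% of checks fall.
    p95 = ordered[min(len(ordered) - 1, int(0.95 * len(ordered)))]
    return {"avg": avg, "p95": p95, "fastest": ordered[0], "checks": len(ordered)}

# Hypothetical samples, not the dashboard's actual data.
stats = summarize_latencies([124, 200, 257, 300, 484])
```

With more samples the nearest-rank P95 converges on the percentile shown above; other percentile conventions (linear interpolation, etc.) give slightly different values.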
Response Time (last 60 checks)

💰 Pricing
Routes to 200+ models. Prices vary per model. See openrouter.ai/models
Free models available
⚡ Rate Limits
Free models available. Rate limits vary per model.
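Because OpenRouter exposes an OpenAI-compatible chat-completions API, "route by price or speed" amounts to adding a provider preference to the request body. A hedged sketch of building such a payload: the `provider.sort` field follows OpenRouter's documented provider-routing options, and the model slug is an assumption; verify both against openrouter.ai/docs before relying on them.

```python
def build_routed_request(model, prompt, sort_by="price"):
    """Build an OpenRouter chat-completions payload that asks the router
    to prefer the cheapest ("price") or fastest ("throughput") provider.
    Field names per OpenRouter's provider-routing docs; treat as assumptions."""
    if sort_by not in ("price", "throughput"):
        raise ValueError("sort_by must be 'price' or 'throughput'")
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "provider": {"sort": sort_by},
    }

# Model slug is illustrative; free variants typically carry a ":free" suffix.
payload = build_routed_request("meta-llama/llama-3.3-70b-instruct", "Hello")
```

The payload would then be POSTed to `https://openrouter.ai/api/v1/chat/completions` with a `Authorization: Bearer <key>` header, exactly as with the OpenAI API.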
🤖 Models (1)
| Model | Task | Context | Vision | Tools | JSON |
|---|---|---|---|---|---|
| Llama 3.3 70B Instruct | llm | 128k | — | ✅ | ✅ |
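The table marks tool calling and JSON output as supported. A sketch of a request payload exercising tool calling via the OpenAI-style function schema that OpenRouter passes through; the tool name and schema here are illustrative, not from OpenRouter's docs:

```python
def build_tool_request(model):
    """Chat payload with an OpenAI-style tool definition.
    get_weather is a hypothetical tool, purely for illustration."""
    weather_tool = {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Look up current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
    return {
        "model": model,
        "messages": [{"role": "user", "content": "Weather in Oslo?"}],
        "tools": [weather_tool],
    }

req = build_tool_request("meta-llama/llama-3.3-70b-instruct")
```

If the model decides to call the tool, the response's `choices[0].message.tool_calls` carries the function name and JSON-encoded arguments, mirroring the OpenAI response shape.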
Recent Checks (showing last 15)

Other Inference Providers
Groq
LPU inference — fastest tokens per second on the market
Cerebras
Wafer-scale chip inference — 1,000+ tokens/sec
Together AI
Open-source model inference — Llama, Mixtral, FLUX
Fireworks AI
Fast open-model inference — FireFunction, Llama, Mixtral
Hugging Face
Serverless inference API — 100k+ open models on demand
fal.ai
Ultra-fast image & video model inference for agents