Fireworks AI
Inference · FREE TIER · Fast open-model inference — FireFunction, Llama, Mixtral
Operational
All systems responding normally
Last checked 05/04/2026, 9:50:13 am
228ms response
Uptime History: 100.00% uptime
2026-04-02 to Today
Uptime: 100.00%
Avg Latency: 165ms
P95 Latency: 257ms
Fastest: 67ms
Checks: 150
Response Time
Last 60 checks: 67ms min · 165ms avg · 404ms max
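The Avg and P95 figures above are standard summaries of the raw check latencies. As a minimal sketch (not the monitor's actual implementation), a nearest-rank percentile over the 15 recent-check samples listed further down looks like this:

```python
import math

def p95(samples):
    """Nearest-rank 95th percentile of a list of latency samples (ms)."""
    s = sorted(samples)
    idx = math.ceil(0.95 * len(s)) - 1  # nearest-rank index, 0-based
    return s[idx]

# Latencies (ms) from the 15 recent checks shown on this page.
latencies = [228, 137, 358, 148, 170, 204, 198, 198, 197,
             229, 198, 348, 202, 241, 219]
avg_ms = sum(latencies) / len(latencies)
```

Note these 15 samples summarize a shorter window than the 60-check stats above, so the numbers will differ.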
💰 Pricing
llama-v3p3-70b-instruct
Input: $0.90 / 1M tokens · Output: $0.90 / 1M tokens
Free tier: 40 req/min, up to 10k tokens/min
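Given the per-million-token rates above, estimating the cost of a request is simple arithmetic. A sketch (the helper name is ours; the $0.90/1M rates are from the pricing shown for llama-v3p3-70b-instruct):

```python
# Rates from the pricing table above ($ per 1M tokens).
INPUT_PER_M = 0.90
OUTPUT_PER_M = 0.90

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Rough USD cost for one request at the listed rates."""
    return (input_tokens / 1e6) * INPUT_PER_M + (output_tokens / 1e6) * OUTPUT_PER_M

# e.g. a 2,000-token prompt with a 500-token completion:
cost = estimate_cost(2_000, 500)  # 0.00225 USD
```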
⚡ Rate Limits
free
RPM: 40 · TPM: 10,000 · Concurrent: 5
Free tier. Paid plans remove limits.
🤖 Models (1)
| Model | Task | Context | Vision | Tools | JSON |
|---|---|---|---|---|---|
| Llama 3.3 70B Instruct (fast inference; DeepSeek and Qwen also available) | llm | 128k | — | ✅ | ✅ |
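The table marks JSON output as supported. Fireworks exposes an OpenAI-compatible chat-completions endpoint; the URL and model id below are assumptions based on that convention, so verify them against the provider docs before use. A sketch that builds (but does not send) a JSON-mode request body:

```python
import json

# Assumed endpoint and model id (OpenAI-compatible API convention);
# confirm against Fireworks' documentation.
FIREWORKS_URL = "https://api.fireworks.ai/inference/v1/chat/completions"
MODEL = "accounts/fireworks/models/llama-v3p3-70b-instruct"

def build_json_mode_request(prompt: str) -> dict:
    """Chat-completions body asking the model for strict JSON output."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "response_format": {"type": "json_object"},
        "max_tokens": 512,
    }

body = json.dumps(build_json_mode_request("List three colors as a JSON array."))
```

Sending it would be a POST to `FIREWORKS_URL` with an `Authorization: Bearer <api key>` header.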
Recent Checks
Showing last 15
Operational · 228ms · 5 Apr, 09:50 am
Operational · 137ms · 5 Apr, 09:32 am
Operational · 358ms · 5 Apr, 09:10 am
Operational · 148ms · 5 Apr, 08:52 am
Operational · 170ms · 5 Apr, 08:37 am
Operational · 204ms · 5 Apr, 08:19 am
Operational · 198ms · 5 Apr, 08:01 am
Operational · 198ms · 5 Apr, 07:49 am
Operational · 197ms · 5 Apr, 07:32 am
Operational · 229ms · 5 Apr, 07:10 am
Operational · 198ms · 5 Apr, 06:53 am
Operational · 348ms · 5 Apr, 06:37 am
Operational · 202ms · 5 Apr, 06:19 am
Operational · 241ms · 5 Apr, 06:00 am
Operational · 219ms · 5 Apr, 05:49 am
Other Inference Providers
Groq
LPU inference — fastest tokens per second on the market
Cerebras
Wafer-scale chip inference — 1,000+ tokens/sec
Together AI
Open-source model inference — Llama, Mixtral, FLUX
OpenRouter
Unified API across 200+ models — route by price or speed
Hugging Face
Serverless inference API — 100k+ open models on demand
fal.ai
Ultra-fast image & video model inference for agents