Groq
Inference · FREE TIER
LPU inference — fastest tokens per second on the market
Operational
All systems responding normally
Last checked 09/04/2026, 6:41:55 pm
324ms response
Uptime: 100.00%
Avg Latency: 354ms
P95 Latency: 438ms
Fastest: 215ms
Checks: 150
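As an illustration only (not the dashboard's actual code), here is how the aggregates above can be derived from raw health-check samples; the record shape and sample values are hypothetical.

```python
# Illustrative: compute uptime, average, P95, and fastest latency from
# raw health-check samples. Data shape is hypothetical.
from statistics import mean, quantiles

# (succeeded, latency_ms) per check; the real page aggregates 150 samples
checks = [(True, 324), (True, 354), (True, 215), (True, 438), (True, 298)]

latencies = sorted(ms for ok, ms in checks if ok)
uptime_pct = 100.0 * sum(ok for ok, _ in checks) / len(checks)
p95_ms = quantiles(latencies, n=100)[94]  # latency 95% of checks beat
print(f"Uptime {uptime_pct:.2f}%  Avg {mean(latencies):.0f}ms  "
      f"P95 {p95_ms:.0f}ms  Fastest {latencies[0]}ms")
```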
Response Time (chart of the last 60 checks)

💰 Pricing
Fastest inference. mixtral-8x7b: $0.24 input / $0.24 output per 1M tokens.
14,400 req/day free
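At a flat rate for input and output tokens alike, per-request cost is simple arithmetic. A minimal sketch using the mixtral-8x7b rate from the pricing line above; the function name and example token counts are ours:

```python
# Per-request cost at $0.24 per 1M tokens, input and output alike.
def request_cost(prompt_tokens: int, completion_tokens: int,
                 in_per_m: float = 0.24, out_per_m: float = 0.24) -> float:
    return (prompt_tokens * in_per_m + completion_tokens * out_per_m) / 1e6

# A 1,000-token prompt with a 500-token reply costs $0.00036
print(f"${request_cost(1_000, 500):.5f}")
```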
⚡ Rate Limits
Per-model limits. llama-3.3-70b: 6,000 TPM (tokens per minute) on the free tier.
$0 plan available with a credit card on file.
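Because the free tier is bounded by requests per day and tokens per minute, clients should expect HTTP 429 responses and back off. A hedged sketch against Groq's OpenAI-compatible chat endpoint; the model id `llama-3.3-70b-versatile` and the retry policy are assumptions, so check Groq's docs for current values:

```python
# Sketch: call Groq's OpenAI-compatible chat endpoint, backing off on
# HTTP 429 when the TPM / req-per-day limits above are exceeded.
# The model id "llama-3.3-70b-versatile" is an assumption.
import os
import time

import requests

URL = "https://api.groq.com/openai/v1/chat/completions"
HEADERS = {"Authorization": f"Bearer {os.environ['GROQ_API_KEY']}"}

def chat(messages, model="llama-3.3-70b-versatile", max_retries=5):
    for attempt in range(max_retries):
        resp = requests.post(URL, headers=HEADERS,
                             json={"model": model, "messages": messages})
        if resp.status_code == 429:
            # Rate-limited: honor Retry-After if sent, else back off exponentially
            time.sleep(float(resp.headers.get("retry-after", 2 ** attempt)))
            continue
        resp.raise_for_status()
        return resp.json()["choices"][0]["message"]["content"]
    raise RuntimeError("still rate-limited after retries")

print(chat([{"role": "user", "content": "Say hello in five words."}]))
```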
🤖 Models (2)
| Model | Notes | Task | Context | Vision | Tools | JSON |
|---|---|---|---|---|---|---|
| Llama 3.3 70B | LPU inference — extremely fast | llm | 128k | — | ✅ | ✅ |
| Mixtral 8x7B | Cheapest on Groq: $0.24/$0.24 per 1M tokens | llm | 33k | — | ✅ | ✅ |
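The Tools and JSON columns refer to function calling and structured output through OpenAI-style request parameters. A minimal sketch of JSON mode, assuming the same endpoint and model id as above and that Groq honors `response_format: {"type": "json_object"}`; JSON mode typically requires the prompt to mention JSON explicitly:

```python
# Sketch of the table's JSON capability: structured output via the
# OpenAI-style response_format parameter. Endpoint and model id are
# the same assumptions as in the rate-limit example above.
import os

import requests

resp = requests.post(
    "https://api.groq.com/openai/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['GROQ_API_KEY']}"},
    json={
        "model": "llama-3.3-70b-versatile",  # assumed id for Llama 3.3 70B
        "response_format": {"type": "json_object"},  # force valid JSON output
        "messages": [
            {"role": "system",
             "content": "Reply as JSON with keys city and country."},
            {"role": "user", "content": "Where is the Eiffel Tower?"},
        ],
    },
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```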
Recent Checks (showing last 15)

Other Inference Providers
Cerebras
Wafer-scale chip inference — 1,000+ tokens/sec
Together AI
Open-source model inference — Llama, Mixtral, FLUX
Fireworks AI
Fast open-model inference — FireFunction, Llama, Mixtral
OpenRouter
Unified API across 200+ models — route by price or speed
Hugging Face
Serverless inference API — 100k+ open models on demand
fal.ai
Ultra-fast image & video model inference for agents