Groq

Inference · FREE TIER

LPU inference — fastest tokens per second on the market

Operational

All systems responding normally

Last checked 09/04/2026, 6:41:55 pm

324ms response

Uptime History: 100.00% uptime
2026-04-05 → Today

Uptime:       100.00%
Avg Latency:  354ms
P95 Latency:  438ms
Fastest:      215ms
Checks:       150

Response Time (last 60 checks)
min 215ms · avg 354ms · max 615ms

💰 Pricing

llama-3.3-70b (FREE)
Input: $0.59/1M tokens · Output: $0.79/1M tokens
Fastest inference. mixtral-8x7b: $0.24/$0.24 (input/output per 1M tokens)
14,400 requests/day free
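The per-token prices above translate into per-request costs with simple arithmetic. A minimal sketch of that calculation, using the listed rates; the function name and structure are illustrative, not part of any Groq SDK:

```python
# Per-1M-token prices taken from the pricing table above.
PRICES_PER_1M = {
    # model: (input USD per 1M tokens, output USD per 1M tokens)
    "llama-3.3-70b": (0.59, 0.79),
    "mixtral-8x7b": (0.24, 0.24),
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one request at the listed rates."""
    in_rate, out_rate = PRICES_PER_1M[model]
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# Example: 2,000 prompt tokens + 500 completion tokens on llama-3.3-70b
print(round(estimate_cost("llama-3.3-70b", 2000, 500), 6))  # 0.001575
```

At these rates, a typical chat turn costs fractions of a cent, so the free request quota (14,400/day) is usually the binding constraint, not price.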

⚡ Rate Limits

Free tier
RPM: 30 · TPM: 6,000 · RPD: 14,400 · TPD: 500,000
Limits are enforced per model; llama-3.3-70b gets 6k TPM on the free tier.

Dev tier
RPM: 100 · TPM: 15,000
$0 plan with a credit card on file.

🤖 Models (2)

Model           Task   Context   Vision   Tools   JSON
Llama 3.3 70B   llm    128k
  LPU inference — extremely fast
Mixtral 8x7B    llm    33k
  Cheapest on Groq: $0.24/$0.24
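Both models are served through Groq's OpenAI-compatible chat completions endpoint. A minimal sketch of the request body; the model id "llama-3.3-70b-versatile" is an assumption based on Groq's published model naming and should be checked against the current model list:

```python
import json

# Assumed endpoint (OpenAI-compatible):
# https://api.groq.com/openai/v1/chat/completions
def build_chat_request(model: str, prompt: str, max_tokens: int = 256) -> dict:
    """Assemble an OpenAI-style chat completion request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        # Capping output tokens helps stay inside the 6k TPM free-tier budget.
        "max_tokens": max_tokens,
    }

body = build_chat_request("llama-3.3-70b-versatile", "Hello, Groq!")
print(json.dumps(body, indent=2))
```

The body would be POSTed with an `Authorization: Bearer <API key>` header; the 33k-context Mixtral model uses the same request shape with a different model id.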

Recent Checks (showing last 15)

Operational · 324ms · 9 Apr, 06:41 pm
Operational · 345ms · 9 Apr, 05:56 pm
Operational · 351ms · 9 Apr, 05:18 pm
Operational · 364ms · 9 Apr, 04:44 pm
Operational · 265ms · 9 Apr, 03:57 pm
Operational · 377ms · 9 Apr, 03:17 pm
Operational · 400ms · 9 Apr, 02:25 pm
Operational · 349ms · 9 Apr, 01:29 pm
Operational · 352ms · 9 Apr, 12:19 pm
Operational · 373ms · 9 Apr, 11:23 am
Operational · 615ms · 9 Apr, 10:48 am
Operational · 319ms · 9 Apr, 10:01 am
Operational · 368ms · 9 Apr, 09:46 am
Operational · 283ms · 9 Apr, 09:20 am
Operational · 312ms · 9 Apr, 08:58 am