DeepInfra

Inference

Cheap serverless inference for Llama, Mistral, and FLUX models

Operational

All systems responding normally

Last checked 29/04/2026, 9:01:40 pm

532ms response

Uptime History (2026-04-25 to Today): 100.00% uptime

Uptime

100.00%

Avg Latency

554ms

P95 Latency

655ms

Fastest

395ms

Checks

150

Response Time

Last 60 checks
395ms min · 554ms avg · 2084ms max
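The dashboard's summary statistics can be reproduced from raw latency samples. A minimal sketch using the 15 latencies from the Recent Checks list below (so the numbers differ from the 150-check aggregates above), with a nearest-rank 95th percentile:

```python
import math

def p95(samples):
    """Nearest-rank 95th percentile: value at ceil(0.95 * n) in sorted order."""
    xs = sorted(samples)
    k = math.ceil(0.95 * len(xs)) - 1  # convert rank to 0-based index
    return xs[k]

# The 15 latencies shown in the Recent Checks section (ms)
latencies = [532, 547, 486, 852, 476, 647, 557, 534, 446, 478,
             485, 501, 495, 538, 1749]

print(min(latencies), round(sum(latencies) / len(latencies)), p95(latencies))
# → 446 622 1749
```

With only 15 samples the nearest-rank p95 lands on the single slowest check, which is why one 1749ms outlier dominates it.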

💰 Pricing

llama-3.3-70b
Input: $0.23/1M · Output: $0.40/1M

Among the cheapest 70B-class inference available. The smaller meta-llama-3.1-8b runs at $0.06/$0.06 per 1M tokens (input/output).
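At these rates, per-request cost is straightforward arithmetic: token count times the per-million rate. A quick sketch using the listed llama-3.3-70b prices:

```python
# Listed llama-3.3-70b rates: $0.23 per 1M input tokens, $0.40 per 1M output tokens
INPUT_PER_M = 0.23
OUTPUT_PER_M = 0.40

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """USD cost of one request at the rates above."""
    return (input_tokens * INPUT_PER_M + output_tokens * OUTPUT_PER_M) / 1_000_000

# e.g. a 2,000-token prompt with a 500-token completion:
print(f"${request_cost(2000, 500):.6f}")  # → $0.000660
```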

⚡ Rate Limits

free
RPM: 30 · TPM: 120,000

Free: 30 RPM, 120K TPM. Paid: up to 1000 RPM.
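A client staying inside the free tier has to throttle itself. A minimal sliding-window limiter for the 30 RPM cap (a sketch only; TPM accounting would track token counts over the same window in the same way):

```python
import time
from collections import deque

class RateLimiter:
    """Client-side sliding-window limiter, e.g. 30 requests per minute."""

    def __init__(self, max_per_minute: int = 30):
        self.max_per_minute = max_per_minute
        self.sent = deque()  # timestamps of requests within the last 60s

    def acquire(self) -> None:
        """Block until a request may be sent, then record it."""
        now = time.monotonic()
        # Drop timestamps that have aged out of the 60-second window
        while self.sent and now - self.sent[0] >= 60:
            self.sent.popleft()
        if len(self.sent) >= self.max_per_minute:
            # Sleep until the oldest recorded request leaves the window
            time.sleep(60 - (now - self.sent[0]))
        self.sent.append(time.monotonic())

limiter = RateLimiter(max_per_minute=30)
# call limiter.acquire() before each API request
```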

🤖 Models (1)

Model | Task | Context | Vision | Tools | JSON
Llama 3.3 70B | llm | 128k | – | – | –

Cheap inference. $0.23/$0.40 per 1M tokens.
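For reference, a hedged sketch of calling this model. DeepInfra exposes an OpenAI-compatible chat-completions endpoint, but the base URL and exact model id below are assumptions, not taken from this page; only the request object is built here, and no network call is made:

```python
import json
import urllib.request

# Assumed values — verify against the provider's own API docs
API_URL = "https://api.deepinfra.com/v1/openai/chat/completions"
MODEL = "meta-llama/Llama-3.3-70B-Instruct"

def build_request(prompt: str, api_key: str) -> urllib.request.Request:
    """Build (but do not send) an OpenAI-style chat-completions request."""
    payload = json.dumps({
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        API_URL,
        data=payload,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

req = build_request("Hello", "YOUR_KEY")
# send with urllib.request.urlopen(req) once the URL and key are confirmed
```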

Recent Checks

Showing last 15
Operational · 532ms · 29 Apr, 09:01 pm
Operational · 547ms · 29 Apr, 08:15 pm
Operational · 486ms · 29 Apr, 07:20 pm
Operational · 852ms · 29 Apr, 06:30 pm
Operational · 476ms · 29 Apr, 05:33 pm
Operational · 647ms · 29 Apr, 04:38 pm
Operational · 557ms · 29 Apr, 03:40 pm
Operational · 534ms · 29 Apr, 02:31 pm
Operational · 446ms · 29 Apr, 01:10 pm
Operational · 478ms · 29 Apr, 11:52 am
Operational · 485ms · 29 Apr, 11:07 am
Operational · 501ms · 29 Apr, 10:00 am
Operational · 495ms · 29 Apr, 09:30 am
Operational · 538ms · 29 Apr, 09:05 am
Operational · 1749ms · 29 Apr, 08:39 am