DeepInfra

Inference

Cheap serverless inference — Llama, Mistral, FLUX at low cost

Operational

All systems responding normally

Last checked 09/04/2026, 6:41:55 pm

471ms response

Uptime History: 100.00% uptime
2026-04-05 to Today

Uptime

100.00%

Avg Latency

525ms

P95 Latency

653ms

Fastest

407ms

Checks

150

Response Time

Last 60 checks
407ms min · 525ms avg · 1123ms max
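The summary statistics above can be reproduced from raw check latencies. A minimal sketch using the 15 latencies from the Recent Checks list below (the dashboard's own figures cover a larger 60-check window, so the numbers differ; the p95 here uses the nearest-rank method, which the dashboard may not):

```python
import math

# Latencies (ms) taken from the "Recent Checks" list (last 15 checks).
latencies = [471, 472, 521, 576, 519, 638, 554, 471, 572,
             449, 526, 556, 566, 511, 940]

def p95(values):
    """Nearest-rank 95th percentile: the value at ceil(0.95 * n) in sorted order."""
    s = sorted(values)
    k = math.ceil(0.95 * len(s)) - 1  # 0-based index
    return s[k]

stats = {
    "min": min(latencies),
    "avg": round(sum(latencies) / len(latencies)),
    "max": max(latencies),
    "p95": p95(latencies),
}
print(stats)
```

Note that with only 15 samples the nearest-rank p95 degenerates to the maximum; over the full 60- or 150-check window it sits below the max, as the 653ms figure above shows.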

💰 Pricing

llama-3.3-70b
Input: $0.23/1M · Output: $0.40/1M

Cheapest 70B-class inference. The smaller meta-llama-3.1-8b costs $0.06/$0.06 per 1M tokens (input/output).
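At per-million-token rates like these, a request's cost is simply tokens multiplied by the per-million price. A minimal sketch using the two models and prices listed above:

```python
# Per-1M-token prices (USD) from the pricing section above.
PRICES = {
    "llama-3.3-70b":     {"input": 0.23, "output": 0.40},
    "meta-llama-3.1-8b": {"input": 0.06, "output": 0.06},
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one request at the listed per-1M-token rates."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Example: a 10k-token prompt with a 1k-token completion on the 70B model.
cost_70b = estimate_cost("llama-3.3-70b", 10_000, 1_000)
cost_8b = estimate_cost("meta-llama-3.1-8b", 10_000, 1_000)
```

At these rates the 70B call above works out to fractions of a cent, and the 8B model costs roughly a quarter of that.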

⚡ Rate Limits

free
RPM: 30 · TPM: 120,000

Free: 30 RPM, 120K TPM. Paid: up to 1000 RPM.

🤖 Models (1)

Model | Task | Context | Vision | Tools | JSON
Llama 3.3 70B | llm | 128k | | |

Cheap inference. $0.23/$0.40 per 1M tokens.
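DeepInfra exposes an OpenAI-compatible API, so the model above can be called with any OpenAI-style client. A minimal sketch that only assembles the request; the base URL and the exact model id are assumptions to verify against DeepInfra's docs:

```python
import json

# Assumed values — confirm against DeepInfra's documentation.
BASE_URL = "https://api.deepinfra.com/v1/openai"
MODEL = "meta-llama/Llama-3.3-70B-Instruct"  # assumed model id for Llama 3.3 70B

def build_chat_request(prompt: str, max_tokens: int = 256) -> tuple[str, str]:
    """Assemble (url, json_body) for an OpenAI-style chat completion.

    Send with any HTTP client, adding an `Authorization: Bearer <key>` header.
    """
    payload = {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }
    return f"{BASE_URL}/chat/completions", json.dumps(payload)

url, body = build_chat_request("Hello")
```

Because the wire format is OpenAI's, official OpenAI SDKs should also work by pointing their base URL at the endpoint above.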

Recent Checks

Showing last 15
Operational · 471ms · 9 Apr, 06:41 pm
Operational · 472ms · 9 Apr, 05:56 pm
Operational · 521ms · 9 Apr, 05:18 pm
Operational · 576ms · 9 Apr, 04:44 pm
Operational · 519ms · 9 Apr, 03:57 pm
Operational · 638ms · 9 Apr, 03:17 pm
Operational · 554ms · 9 Apr, 02:25 pm
Operational · 471ms · 9 Apr, 01:29 pm
Operational · 572ms · 9 Apr, 12:19 pm
Operational · 449ms · 9 Apr, 11:23 am
Operational · 526ms · 9 Apr, 10:48 am
Operational · 556ms · 9 Apr, 10:01 am
Operational · 566ms · 9 Apr, 09:46 am
Operational · 511ms · 9 Apr, 09:20 am
Operational · 940ms · 9 Apr, 08:58 am