Hugging Face Inference (Free Tier)
Serverless inference API — 100k+ open models on demand

Status: Operational. All systems responding normally.
Last checked: 25/04/2026, 11:42:30 am (189ms response)
Uptime History (2026-04-21 to today): 100.00% uptime

Uptime: 100.00%
Avg Latency: 207ms
P95 Latency: 263ms
Fastest: 70ms
Checks: 150
Response Time (last 60 checks): 70ms min, 207ms avg, 382ms max
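The aggregates above are plain summary statistics over recent latency samples. A minimal sketch of the computation, fed with the 15 check latencies listed under Recent Checks below (the dashboard's own figures aggregate a larger window, so the outputs won't match exactly):

```python
import statistics

# Latencies (ms) from the 15 recent checks listed further down this page.
# The dashboard's avg/p95 are computed over 60+ checks, so these outputs
# are illustrative rather than an exact reproduction.
latencies_ms = [189, 195, 263, 198, 197, 211, 227, 218, 215,
                111, 229, 195, 158, 208, 201]

avg = statistics.mean(latencies_ms)
p95 = statistics.quantiles(latencies_ms, n=100)[94]  # 95th-percentile cut point
print(f"avg={avg:.0f}ms  p95={p95:.0f}ms  "
      f"min={min(latencies_ms)}ms  max={max(latencies_ms)}ms")
```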
💰 Pricing
Serverless inference: FREE (free tier with rate limits)
Dedicated endpoints from $0.06/hr. Model pricing varies.
⚡ Rate Limits
Free tier: 30 RPM, 60,000 TPM per model.
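To stay under the free-tier limit, a client typically backs off when the API returns HTTP 429. A minimal sketch, assuming the serverless endpoint URL shape, the model ID meta-llama/Llama-3.3-70B-Instruct, and an HF_TOKEN environment variable (none of which are confirmed by this page):

```python
import os
import time
import requests

# Assumed endpoint and model ID for illustration only.
API_URL = "https://api-inference.huggingface.co/models/meta-llama/Llama-3.3-70B-Instruct"
HEADERS = {"Authorization": f"Bearer {os.environ['HF_TOKEN']}"}

def query(payload: dict, max_retries: int = 5) -> dict:
    """POST to the serverless endpoint, backing off when rate-limited."""
    for attempt in range(max_retries):
        resp = requests.post(API_URL, headers=HEADERS, json=payload, timeout=30)
        if resp.status_code == 429:      # free-tier rate limit hit
            time.sleep(2 ** attempt)     # exponential backoff: 1s, 2s, 4s, ...
            continue
        resp.raise_for_status()
        return resp.json()
    raise RuntimeError("still rate-limited after all retries")

print(query({"inputs": "The capital of France is"}))
```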
🤖 Models (1)
| Model | Task | Context | Vision | Tools | JSON |
|---|---|---|---|---|---|
| Llama 3.3 70B Instruct | llm | 128k | — | ✅ | ✅ |
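The table indicates tool-calling and JSON support for this model. A simple chat call might look like the sketch below, using the huggingface_hub client; the model ID and client usage are assumptions, not shown on this page:

```python
from huggingface_hub import InferenceClient

# Minimal sketch: chat completion against the serverless API.
# The model ID is an assumption based on the table row above.
client = InferenceClient(model="meta-llama/Llama-3.3-70B-Instruct")

response = client.chat_completion(
    messages=[{"role": "user",
               "content": "In one sentence, what is serverless inference?"}],
    max_tokens=64,
)
print(response.choices[0].message.content)
```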
Recent Checks (showing last 15)

| Status | Latency | Time |
|---|---|---|
| Operational | 189ms | 25 Apr, 11:42 am |
| Operational | 195ms | 25 Apr, 10:58 am |
| Operational | 263ms | 25 Apr, 10:04 am |
| Operational | 198ms | 25 Apr, 09:46 am |
| Operational | 197ms | 25 Apr, 09:25 am |
| Operational | 211ms | 25 Apr, 09:04 am |
| Operational | 227ms | 25 Apr, 08:46 am |
| Operational | 218ms | 25 Apr, 08:22 am |
| Operational | 215ms | 25 Apr, 07:58 am |
| Operational | 111ms | 25 Apr, 07:35 am |
| Operational | 229ms | 25 Apr, 07:10 am |
| Operational | 195ms | 25 Apr, 06:50 am |
| Operational | 158ms | 25 Apr, 06:27 am |
| Operational | 208ms | 25 Apr, 06:00 am |
| Operational | 201ms | 25 Apr, 05:39 am |
Other Inference Providers

- Groq: LPU inference — fastest tokens per second on the market
- Cerebras: Wafer-scale chip inference — 1,000+ tokens/sec
- Together AI: Open-source model inference — Llama, Mixtral, FLUX
- Fireworks AI: Fast open-model inference — FireFunction, Llama, Mixtral
- OpenRouter: Unified API across 200+ models — route by price or speed
- fal.ai: Ultra-fast image & video model inference for agents