NVIDIA NIM
Inference · FREE TIER
Optimised inference microservices — GPU-native, enterprise-grade
Operational
All systems responding normally
Last checked 25/04/2026, 11:42:30 am
227ms response
Uptime History: 99.33% uptime
2026-04-21 to Today
| Uptime | Avg Latency | P95 Latency | Fastest | Checks |
|---|---|---|---|---|
| 99.33% | 303ms | 419ms | 214ms | 150 |
Response Time
Last 60 checks: 214ms min · 303ms avg · 494ms max
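The summary figures above come from periodic health-check probes. A minimal sketch of how the min/avg/p95 values could be derived from a window of response-time samples; the sample list below is illustrative, not the actual 60-check window:

```python
# Sketch: deriving min/avg/p95/max from raw probe latencies (milliseconds).
import math

def latency_summary(samples_ms: list[float]) -> dict[str, float]:
    """Summarise a window of response-time samples in milliseconds."""
    ordered = sorted(samples_ms)
    # Nearest-rank p95: the value at the 95th-percentile position.
    p95_index = max(0, math.ceil(0.95 * len(ordered)) - 1)
    return {
        "min": ordered[0],
        "avg": sum(ordered) / len(ordered),
        "p95": ordered[p95_index],
        "max": ordered[-1],
    }

# Illustrative sample window (values taken from the recent checks below).
print(latency_summary([227, 351, 275, 225, 244, 239, 230, 306, 278, 365]))
```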
💰 Pricing
llama-3.3-70b (FREE)
Input: $0.77/1M tokens · Output: $0.77/1M tokens
Build.nvidia.com free tier. Enterprise pricing via sales.
1000 credits free
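The build.nvidia.com free tier is typically accessed through an OpenAI-compatible chat API, so a standard client works against it. A minimal sketch, assuming the base URL `https://integrate.api.nvidia.com/v1` and the model id `meta/llama-3.3-70b-instruct` (check build.nvidia.com for the current values) and that `NVIDIA_API_KEY` holds a key issued with the free credits:

```python
# Minimal sketch: calling the NIM free tier through an OpenAI-compatible client.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",  # assumed NIM endpoint
    api_key=os.environ["NVIDIA_API_KEY"],            # key from build.nvidia.com signup
)

response = client.chat.completions.create(
    model="meta/llama-3.3-70b-instruct",  # assumed model id for Llama 3.3 70B
    messages=[{"role": "user", "content": "Summarise what a NIM microservice is."}],
    max_tokens=200,
)
print(response.choices[0].message.content)
```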
⚡ Rate Limits
Free tier: 40 RPM
1000 free API credits on signup.
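At 40 requests per minute, a simple client-side throttle helps stay under the quota instead of relying on server-side rejections. A minimal sketch of an even-pacing limiter; the 40 RPM figure is taken from the tier above, so adjust it if your account shows a different quota:

```python
# Sketch: client-side throttle keeping request rate at or under 40 RPM.
import time
import threading

class RpmThrottle:
    """Blocks callers so that request starts are spaced to at most `rpm` per minute."""

    def __init__(self, rpm: int = 40):
        self.min_interval = 60.0 / rpm   # seconds between request starts
        self._lock = threading.Lock()
        self._next_slot = 0.0

    def wait(self) -> None:
        with self._lock:
            now = time.monotonic()
            start_at = max(now, self._next_slot)
            self._next_slot = start_at + self.min_interval
        time.sleep(start_at - now)

throttle = RpmThrottle(rpm=40)
for i in range(3):
    throttle.wait()                        # blocks until a request slot is free
    print(f"request {i} dispatched")       # replace with the actual API call
```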
🤖 Models (1)
| Model | Task | Context | Vision | Tools | JSON |
|---|---|---|---|---|---|
| Llama 3.3 70B (enterprise-grade inference microservices) | llm | 128k | — | ✅ | ✅ |
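The Tools and JSON columns indicate function-calling and structured-output support. A minimal sketch of a tool call through the same OpenAI-compatible surface; the endpoint and model id repeat the assumptions above, and `get_weather` is a hypothetical tool defined only for illustration:

```python
# Sketch: exercising the "Tools" capability via the OpenAI-compatible API.
import json
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",  # assumed NIM endpoint
    api_key=os.environ["NVIDIA_API_KEY"],
)

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool, for illustration only
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="meta/llama-3.3-70b-instruct",  # assumed model id
    messages=[{"role": "user", "content": "What's the weather in Oslo?"}],
    tools=tools,
)

# If the model chose to call the tool, inspect the requested arguments.
call = response.choices[0].message.tool_calls[0]
print(call.function.name, json.loads(call.function.arguments))
```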
Recent Checks
Showing last 15

| Status | Latency | Time |
|---|---|---|
| Operational | 227ms | 25 Apr, 11:42 am |
| Operational | 351ms | 25 Apr, 10:58 am |
| Operational | 275ms | 25 Apr, 10:04 am |
| Operational | 225ms | 25 Apr, 09:46 am |
| Operational | 244ms | 25 Apr, 09:25 am |
| Operational | 239ms | 25 Apr, 09:04 am |
| Operational | 230ms | 25 Apr, 08:46 am |
| Operational | 306ms | 25 Apr, 08:22 am |
| Operational | 278ms | 25 Apr, 07:58 am |
| Operational | 365ms | 25 Apr, 07:35 am |
| Operational | 271ms | 25 Apr, 07:10 am |
| Operational | 239ms | 25 Apr, 06:50 am |
| Operational | 266ms | 25 Apr, 06:27 am |
| Operational | 352ms | 25 Apr, 06:00 am |
| Operational | 376ms | 25 Apr, 05:39 am |
Other Inference Providers
Groq
LPU inference — fastest tokens per second on the market
Cerebras
Wafer-scale chip inference — 1,000+ tokens/sec
Together AI
Open-source model inference — Llama, Mixtral, FLUX
Fireworks AI
Fast open-model inference — FireFunction, Llama, Mixtral
OpenRouter
Unified API across 200+ models — route by price or speed
Hugging Face
Serverless inference API — 100k+ open models on demand