NVIDIA NIM
Inference · FREE TIER
Optimised inference microservices — GPU-native, enterprise-grade
Operational
All systems responding normally
Last checked 15/05/2026, 1:22:41 pm
465ms response
Uptime History: 100.00% uptime (2026-05-10 to today)
| Uptime | Avg Latency | P95 Latency | Fastest | Checks |
|---|---|---|---|---|
| 100.00% | 429ms | 766ms | 225ms | 150 |
Response Time (last 60 checks): 225ms min · 429ms avg · 999ms max
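These summary figures are plain aggregates over raw check latencies. A minimal sketch of the arithmetic, using the 15 recent-check samples listed further down this page (the dashboard's own stats cover more checks, so its numbers differ slightly):

```python
# Reproducing the summary stats from raw check latencies.
# Samples are the 15 "Recent Checks" below; the dashboard computes
# over 60+ checks, so its figures differ slightly from these.
latencies_ms = [465, 766, 333, 533, 765, 427, 628, 597, 708,
                433, 504, 368, 625, 547, 508]

def p95(samples: list[int]) -> int:
    """Nearest-rank 95th percentile."""
    ordered = sorted(samples)
    return ordered[max(0, round(0.95 * len(ordered)) - 1)]

print(f"min {min(latencies_ms)}ms, "
      f"avg {sum(latencies_ms) / len(latencies_ms):.0f}ms, "
      f"max {max(latencies_ms)}ms, "
      f"p95 {p95(latencies_ms)}ms")
# -> min 333ms, avg 547ms, max 766ms, p95 765ms
```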
💰 Pricing
llama-3.3-70b: FREE
Input: $0.77/1M tokens · Output: $0.77/1M tokens
build.nvidia.com free tier; enterprise pricing via sales.
1000 credits free
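At the listed rate of $0.77 per million tokens for both input and output, per-request cost is simple arithmetic; a quick sketch (the token counts below are made up for illustration):

```python
RATE_PER_M = 0.77  # USD per 1M tokens, input and output (listed above)

def cost_usd(input_tokens: int, output_tokens: int) -> float:
    """Estimated cost of one request at the listed flat rate."""
    return (input_tokens + output_tokens) / 1_000_000 * RATE_PER_M

# e.g. a 2,000-token prompt with an 800-token reply:
print(f"${cost_usd(2_000, 800):.4f}")  # -> $0.0022
```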
⚡ Rate Limits
Free tier: 40 RPM (requests per minute)
1000 free API credits on signup.
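A 40 RPM ceiling is easy to hit in a loop, so a client-side throttle is worth having. A minimal sketch of one generic pattern — this is not part of any NVIDIA SDK; the only value taken from this page is the 40 RPM limit:

```python
import time

class RpmThrottle:
    """Spaces requests at least 60/rpm seconds apart."""
    def __init__(self, rpm: int = 40):  # free-tier limit listed above
        self.interval = 60.0 / rpm
        self.next_ok = 0.0

    def wait(self) -> None:
        """Block until the next request is allowed."""
        now = time.monotonic()
        if now < self.next_ok:
            time.sleep(self.next_ok - now)
            now = self.next_ok
        self.next_ok = now + self.interval

throttle = RpmThrottle(rpm=40)
# Call throttle.wait() before each API request to stay under 40 RPM.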
🤖 Models (1)
| Model | Task | Context | Vision | Tools | JSON |
|---|---|---|---|---|---|
| Llama 3.3 70B (enterprise-grade inference microservices) | llm | 128k | — | ✅ | ✅ |
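NIM endpoints on build.nvidia.com speak an OpenAI-compatible API, so the model above can be called with the standard openai client. A minimal sketch — the base URL and model id follow NVIDIA's API catalog conventions, but verify both against the model card before relying on them:

```python
from openai import OpenAI

# Assumes build.nvidia.com's OpenAI-compatible endpoint and the
# catalog model id for Llama 3.3 70B; check the model card for
# the exact values.
client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",
    api_key="nvapi-...",  # your NVIDIA API key
)

resp = client.chat.completions.create(
    model="meta/llama-3.3-70b-instruct",
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
    temperature=0.2,
    max_tokens=64,
)
print(resp.choices[0].message.content)
```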
Recent Checks
Showing last 15 · all Operational

| Time (15 May) | Status | Response |
|---|---|---|
| 01:22 pm | Operational | 465ms |
| 12:04 pm | Operational | 766ms |
| 11:10 am | Operational | 333ms |
| 10:05 am | Operational | 533ms |
| 09:38 am | Operational | 765ms |
| 09:10 am | Operational | 427ms |
| 08:41 am | Operational | 628ms |
| 08:17 am | Operational | 597ms |
| 07:46 am | Operational | 708ms |
| 07:15 am | Operational | 433ms |
| 06:43 am | Operational | 504ms |
| 06:06 am | Operational | 368ms |
| 05:24 am | Operational | 625ms |
| 04:48 am | Operational | 547ms |
| 04:01 am | Operational | 508ms |
Other Inference Providers
- Groq: LPU inference — fastest tokens per second on the market
- Cerebras: Wafer-scale chip inference — 1,000+ tokens/sec
- Together AI: Open-source model inference — Llama, Mixtral, FLUX
- Fireworks AI: Fast open-model inference — FireFunction, Llama, Mixtral
- OpenRouter: Unified API across 200+ models — route by price or speed
- Hugging Face: Serverless inference API — 100k+ open models on demand