NVIDIA NIM
Inference · FREE TIER
Optimised inference microservices — GPU-native, enterprise-grade
Status: Operational (all systems responding normally)
Last checked: 05/04/2026, 9:50:13 am · 261ms response
Uptime History (2026-04-02 to today): 100.00% uptime
| Uptime | Avg Latency | P95 Latency | Fastest | Checks |
|---|---|---|---|---|
| 100.00% | 301ms | 412ms | 173ms | 150 |
Response Time (last 60 checks): 173ms min · 301ms avg · 466ms max
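The P95 figure above is a percentile over the recorded checks: the latency that 95% of checks came in at or under. A minimal sketch of one common way to compute it (nearest-rank); the dashboard does not say which percentile method it actually uses, so treat this as illustrative.

```python
import math

def percentile(samples: list[float], p: float) -> float:
    """Nearest-rank percentile: the smallest sample with at least
    p% of all samples at or below it."""
    ranked = sorted(samples)
    k = math.ceil(p / 100 * len(ranked))  # 1-based rank
    return ranked[max(k, 1) - 1]

# With 60 checks, p95 is the 57th-fastest sample:
# math.ceil(0.95 * 60) == 57, so percentile(latencies, 95) == ranked[56].
```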
💰 Pricing
llama-3.3-70b (FREE)
Input: $0.77/1M tokens · Output: $0.77/1M tokens
Build.nvidia.com free tier; enterprise pricing via sales.
1000 credits free
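Getting started on the free tier takes only an API key from build.nvidia.com. Below is a minimal sketch using the `openai` Python client, assuming the OpenAI-compatible endpoint that the NIM API catalog exposes; verify the base URL and the exact model id (`meta/llama-3.3-70b-instruct` here) against the model's catalog page.

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",  # assumed NIM catalog endpoint
    api_key="nvapi-...",  # key from build.nvidia.com; calls draw on free credits
)

resp = client.chat.completions.create(
    model="meta/llama-3.3-70b-instruct",  # assumed catalog id for Llama 3.3 70B
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
    max_tokens=64,
)
print(resp.choices[0].message.content)
```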
⚡ Rate Limits
Free tier: 40 RPM
1000 free API credits on signup.
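With a hard cap of 40 requests per minute, it is worth pacing calls client-side rather than waiting for 429s. A minimal sketch of a spacing throttle (one request every 1.5 s); how the server counts bursts within the minute is not documented here, so this paces conservatively.

```python
import time

class MinIntervalThrottle:
    """Space requests evenly to stay under an RPM cap.
    For rpm=40 this allows one request every 1.5 seconds."""

    def __init__(self, rpm: int = 40):
        self.interval = 60.0 / rpm
        self.next_allowed = 0.0

    def wait(self) -> None:
        """Block until the next request slot, then reserve the one after it."""
        now = time.monotonic()
        if now < self.next_allowed:
            time.sleep(self.next_allowed - now)
            now = self.next_allowed
        self.next_allowed = now + self.interval

throttle = MinIntervalThrottle(rpm=40)
# throttle.wait()  # call immediately before each API request
```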
🤖 Models (1)
| Model | Description | Task | Context | Vision | Tools | JSON |
|---|---|---|---|---|---|---|
| Llama 3.3 70B | Enterprise-grade inference microservices | llm | 128k | — | ✅ | ✅ |
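The Tools and JSON columns indicate support for OpenAI-style tool calling and structured output. Below is a hedged tool-calling sketch against the same assumed endpoint as above; the `get_weather` tool is hypothetical, and whether this exact model id honours the schema should be confirmed on its build.nvidia.com page.

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",  # assumed catalog endpoint
    api_key="nvapi-...",
)

# Hypothetical tool, declared in the standard OpenAI function-calling schema.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = client.chat.completions.create(
    model="meta/llama-3.3-70b-instruct",  # assumed catalog id
    messages=[{"role": "user", "content": "What's the weather in Oslo?"}],
    tools=tools,
)

msg = resp.choices[0].message
if msg.tool_calls:  # the model chose to call the tool
    call = msg.tool_calls[0]
    print(call.function.name, call.function.arguments)
```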
Recent Checks
Showing last 15 (all Operational)
| Time (5 Apr) | Status | Response |
|---|---|---|
| 09:50 am | Operational | 261ms |
| 09:32 am | Operational | 465ms |
| 09:10 am | Operational | 380ms |
| 08:52 am | Operational | 337ms |
| 08:37 am | Operational | 241ms |
| 08:19 am | Operational | 298ms |
| 08:01 am | Operational | 358ms |
| 07:49 am | Operational | 272ms |
| 07:32 am | Operational | 323ms |
| 07:10 am | Operational | 293ms |
| 06:53 am | Operational | 296ms |
| 06:37 am | Operational | 285ms |
| 06:19 am | Operational | 221ms |
| 06:00 am | Operational | 242ms |
| 05:49 am | Operational | 268ms |
Other Inference Providers
- Groq: LPU inference — fastest tokens per second on the market
- Cerebras: Wafer-scale chip inference — 1,000+ tokens/sec
- Together AI: Open-source model inference — Llama, Mixtral, FLUX
- Fireworks AI: Fast open-model inference — FireFunction, Llama, Mixtral
- OpenRouter: Unified API across 200+ models — route by price or speed
- Hugging Face: Serverless inference API — 100k+ open models on demand