Novita AI
Inference
200+ models via one API — LLM, image, video, TTS
Operational
All systems responding normally
Last checked 6 Apr 2026, 12:45:20 am
382ms response
Uptime History: 100.00% uptime (2026-04-03 to today)

| Uptime | Avg Latency | P95 Latency | Fastest | Checks |
|---|---|---|---|---|
| 100.00% | 502ms | 621ms | 377ms | 150 |
Response Time
Last 60 checks: 377ms min · 502ms avg · 707ms max
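The dashboard does not state how its percentiles are aggregated; a minimal sketch, assuming the common nearest-rank method, applied to the 15 latencies listed under Recent Checks below:

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile: smallest sample with at least p% of values at or below it."""
    ranked = sorted(samples)
    k = math.ceil(p / 100 * len(ranked))
    return ranked[max(k - 1, 0)]

# Latencies (ms) from the 15 most recent checks shown on this page.
latencies = [382, 621, 474, 425, 577, 405, 418, 629, 487,
             527, 469, 467, 500, 563, 440]

print(f"min {min(latencies)}ms  "
      f"avg {sum(latencies) / len(latencies):.0f}ms  "
      f"p95 {percentile(latencies, 95)}ms  "
      f"max {max(latencies)}ms")
```

Note that over only 15 samples, nearest-rank P95 lands on the maximum; the headline P95 above is computed over a larger window.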
💰 Pricing
llama-3.3-70b
Input: $0.45/1M tokens · Output: $0.45/1M tokens
LLM + image + video + TTS in one API
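The listed rates make per-request cost easy to estimate. A small sketch using the llama-3.3-70b prices above (the token counts in the example are illustrative, not from the source):

```python
# USD per 1M tokens for llama-3.3-70b, from the pricing table above.
PRICE_PER_M = {"input": 0.45, "output": 0.45}

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate request cost in USD from per-million-token rates."""
    return (input_tokens * PRICE_PER_M["input"]
            + output_tokens * PRICE_PER_M["output"]) / 1_000_000

# e.g. a 2,000-token prompt with a 500-token completion:
print(f"${estimate_cost(2_000, 500):.6f}")
```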
⚡ Rate Limits
free
RPM: 60 · TPM: 120,000
Free: 60 RPM. Paid: higher limits available.
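To stay under the free-tier cap client-side, a sliding-window throttle is one common approach. A minimal sketch (the class and its interface are illustrative, not part of any Novita SDK):

```python
import time
from collections import deque

class RpmThrottle:
    """Block until a request slot is free under a requests-per-minute cap."""

    def __init__(self, rpm: int = 60):
        self.rpm = rpm
        self.sent = deque()  # timestamps of requests within the last 60s

    def acquire(self):
        now = time.monotonic()
        # Evict timestamps older than the 60-second window.
        while self.sent and now - self.sent[0] >= 60:
            self.sent.popleft()
        if len(self.sent) >= self.rpm:
            # Sleep until the oldest request ages out, then retry.
            time.sleep(60 - (now - self.sent[0]))
            return self.acquire()
        self.sent.append(time.monotonic())

throttle = RpmThrottle(rpm=60)  # free-tier cap from the table above
throttle.acquire()              # call once before each API request
```

A token-per-minute (TPM) budget could be tracked the same way, summing token counts instead of counting requests.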
🤖 Models (1)
| Model | Task | Context | Vision | Tools | JSON |
|---|---|---|---|---|---|
| Llama 3.3 70B | llm | 128k | — | ✅ | ✅ |
Recent Checks
Showing last 15

| Status | Latency | Checked |
|---|---|---|
| Operational | 382ms | 6 Apr, 12:45 am |
| Operational | 621ms | 6 Apr, 12:23 am |
| Operational | 474ms | 6 Apr, 12:03 am |
| Operational | 425ms | 5 Apr, 11:42 pm |
| Operational | 577ms | 5 Apr, 11:18 pm |
| Operational | 405ms | 5 Apr, 10:48 pm |
| Operational | 418ms | 5 Apr, 10:23 pm |
| Operational | 629ms | 5 Apr, 10:00 pm |
| Operational | 487ms | 5 Apr, 09:49 pm |
| Operational | 527ms | 5 Apr, 09:31 pm |
| Operational | 469ms | 5 Apr, 09:14 pm |
| Operational | 467ms | 5 Apr, 08:56 pm |
| Operational | 500ms | 5 Apr, 08:38 pm |
| Operational | 563ms | 5 Apr, 08:20 pm |
| Operational | 440ms | 5 Apr, 08:03 pm |
Other Inference Providers
Groq
LPU inference — fastest tokens per second on the market
Cerebras
Wafer-scale chip inference — 1,000+ tokens/sec
Together AI
Open-source model inference — Llama, Mixtral, FLUX
Fireworks AI
Fast open-model inference — FireFunction, Llama, Mixtral
OpenRouter
Unified API across 200+ models — route by price or speed
Hugging Face
Serverless inference API — 100k+ open models on demand