Together AI
Inference · Free tier · Open-source model inference — Llama, Mixtral, FLUX
Operational
All systems responding normally
Last checked: 15/05/2026, 1:22:41 pm
806ms response time
Uptime History (2026-05-10 to today): 100.00% uptime
Uptime: 100.00%
Avg latency: 785ms
P95 latency: 1112ms
Fastest check: 416ms
Checks: 150
Response time (last 60 checks): 416ms min · 785ms avg · 1451ms max
💰 Pricing
llama-3.3-70b-turbo: Input $0.88 / 1M tokens · Output $0.88 / 1M tokens
Free $1 credit on signup.
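Since input and output are billed at the same rate, per-request cost is simple to estimate. A minimal sketch in Python; the rate is the one listed above, and the token counts are purely illustrative:

```python
# Rough cost estimate at the listed rate of $0.88 per 1M tokens,
# which applies to both input and output for llama-3.3-70b-turbo.
PRICE_PER_M_TOKENS = 0.88  # USD per million tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of a single request."""
    return (input_tokens + output_tokens) / 1_000_000 * PRICE_PER_M_TOKENS

# Example: a 2,000-token prompt with a 500-token completion.
print(f"${estimate_cost(2_000, 500):.4f}")  # -> $0.0022
```

At that rate, the $1 signup credit covers a little over 1.1M tokens of combined input and output.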
⚡ Rate Limits
Standard tier: 60 requests per minute (RPM)
Limits vary by model and account tier.
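To stay under a 60 RPM ceiling, a simple client-side throttle that spaces requests by at least one second is usually enough. A minimal sketch; only the RPM figure comes from the listing above, the rest is illustrative:

```python
import time

class RateLimiter:
    """Naive client-side throttle: allows at most `rpm` calls per minute."""

    def __init__(self, rpm: int = 60):
        self.min_interval = 60.0 / rpm  # seconds between calls
        self.last_call = 0.0

    def wait(self) -> None:
        # Sleep just long enough to keep calls at least min_interval apart.
        elapsed = time.monotonic() - self.last_call
        if elapsed < self.min_interval:
            time.sleep(self.min_interval - elapsed)
        self.last_call = time.monotonic()

limiter = RateLimiter(rpm=60)
# Call limiter.wait() immediately before each API request.
```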
🤖 Models (1)
| Model | Task | Context | Vision | Tools | JSON |
|---|---|---|---|---|---|
| Llama 3.3 70B Turbo (open-source inference; fast and reliable) | llm | 128k | — | ✅ | ✅ |
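The table lists tool-calling and JSON-mode support at a 128k context. Below is a minimal sketch of a JSON-mode request against Together's OpenAI-compatible chat completions endpoint; the full model identifier is an assumption based on the `llama-3.3-70b-turbo` slug above and should be checked against the provider's model list:

```python
import os
import requests

# Sketch of a JSON-mode request to Together's OpenAI-compatible endpoint.
# The model ID below is assumed from the "llama-3.3-70b-turbo" slug shown
# on this page; verify the exact identifier before use.
resp = requests.post(
    "https://api.together.xyz/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['TOGETHER_API_KEY']}"},
    json={
        "model": "meta-llama/Llama-3.3-70B-Instruct-Turbo",
        "messages": [
            {"role": "user", "content": "List three open-source LLMs as JSON."}
        ],
        "response_format": {"type": "json_object"},  # JSON mode per the table
        "max_tokens": 256,
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```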
Recent Checks
Showing the last 15 checks; all returned Operational.

| Time | Status | Response |
|---|---|---|
| 15 May, 01:22 pm | Operational | 806ms |
| 15 May, 12:04 pm | Operational | 1057ms |
| 15 May, 11:10 am | Operational | 640ms |
| 15 May, 10:05 am | Operational | 973ms |
| 15 May, 09:38 am | Operational | 1118ms |
| 15 May, 09:10 am | Operational | 761ms |
| 15 May, 08:41 am | Operational | 1029ms |
| 15 May, 08:17 am | Operational | 958ms |
| 15 May, 07:46 am | Operational | 1068ms |
| 15 May, 07:15 am | Operational | 835ms |
| 15 May, 06:43 am | Operational | 852ms |
| 15 May, 06:06 am | Operational | 724ms |
| 15 May, 05:24 am | Operational | 891ms |
| 15 May, 04:48 am | Operational | 1077ms |
| 15 May, 04:01 am | Operational | 830ms |
Other Inference Providers
- Groq: LPU inference — fastest tokens per second on the market
- Cerebras: Wafer-scale chip inference — 1,000+ tokens/sec
- Fireworks AI: Fast open-model inference — FireFunction, Llama, Mixtral
- OpenRouter: Unified API across 200+ models — route by price or speed
- Hugging Face: Serverless inference API — 100k+ open models on demand
- fal.ai: Ultra-fast image & video model inference for agents