Together AI

Inference (Free tier)

Open-source model inference — Llama, Mixtral, FLUX

Operational

All systems responding normally

Last checked 05/04/2026, 9:50:13 am

668ms response

Uptime History: 100.00% uptime (2026-04-02 to Today)

Uptime: 100.00%
Avg Latency: 700ms
P95 Latency: 879ms
Fastest: 400ms
Checks: 150

Response Time (last 60 checks): 400ms min · 700ms avg · 1372ms max

💰 Pricing

llama-3.3-70b-turbo
Input: $0.88/1M tokens · Output: $0.88/1M tokens

Free $1 credit on signup
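The per-token pricing above makes request costs easy to estimate. A minimal sketch, assuming the listed rate of $0.88 per 1M tokens applies uniformly to both input and output for llama-3.3-70b-turbo:

```python
# Cost estimator based on the listed llama-3.3-70b-turbo pricing.
# The rates are taken from the pricing section above; actual billing
# may differ per model and account.
INPUT_PRICE_PER_M = 0.88   # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 0.88  # USD per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated request cost in USD."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# e.g. a 2,000-token prompt with a 500-token completion
print(f"${estimate_cost(2_000, 500):.6f}")
```

At these rates the free $1 signup credit covers a little over a million tokens of combined input and output.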

⚡ Rate Limits

standard tier: 60 requests per minute (RPM)

Limits vary by model and account tier.
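One way to stay under a 60 RPM cap is to throttle client-side before each request. A minimal sliding-window sketch, assuming the "standard" limit above; actual enforcement happens server-side and may vary by model and account:

```python
import time
from collections import deque

class RpmThrottle:
    """Client-side sliding-window throttle for a requests-per-minute cap."""

    def __init__(self, rpm: int = 60, clock=time.monotonic, sleep=time.sleep):
        self.rpm = rpm
        self.window = 60.0      # seconds
        self.clock = clock      # injectable for testing
        self.sleep = sleep
        self.sent = deque()     # timestamps of requests in the current window

    def acquire(self) -> None:
        """Block until a request can be sent without exceeding the limit."""
        now = self.clock()
        # Drop timestamps that have aged out of the 60s window.
        while self.sent and now - self.sent[0] >= self.window:
            self.sent.popleft()
        if len(self.sent) >= self.rpm:
            # Wait until the oldest request leaves the window.
            self.sleep(self.window - (now - self.sent[0]))
            now = self.clock()
            while self.sent and now - self.sent[0] >= self.window:
                self.sent.popleft()
        self.sent.append(now)
```

Call `acquire()` before each API request; it returns immediately while under the cap and sleeps just long enough otherwise.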

🤖 Models (1)

Llama 3.3 70B Turbo (task: llm, context: 128k)

Open-source model inference. Fast and reliable.
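Together AI exposes an OpenAI-compatible chat completions API, so a request for the model above is a small JSON body. A sketch that only builds the payload (the endpoint URL is an assumption, and the short id shown here may map to a fully qualified id such as `meta-llama/...` in the provider's docs):

```python
import json

# Assumed OpenAI-compatible endpoint; verify against the provider docs.
API_URL = "https://api.together.xyz/v1/chat/completions"
MODEL = "llama-3.3-70b-turbo"  # model id as listed above

def build_request(prompt: str, max_tokens: int = 256) -> dict:
    """Build the JSON body for a single-turn chat completion."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

body = json.dumps(build_request("Say hello in one word."))
```

The resulting `body` can be POSTed to the endpoint with an `Authorization: Bearer <api-key>` header using any HTTP client.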

Recent Checks

Showing last 15
Operational · 668ms · 5 Apr, 09:50 am
Operational · 685ms · 5 Apr, 09:32 am
Operational · 744ms · 5 Apr, 09:10 am
Operational · 803ms · 5 Apr, 08:52 am
Operational · 724ms · 5 Apr, 08:37 am
Operational · 707ms · 5 Apr, 08:19 am
Operational · 787ms · 5 Apr, 08:01 am
Operational · 680ms · 5 Apr, 07:49 am
Operational · 985ms · 5 Apr, 07:32 am
Operational · 691ms · 5 Apr, 07:10 am
Operational · 771ms · 5 Apr, 06:53 am
Operational · 635ms · 5 Apr, 06:37 am
Operational · 545ms · 5 Apr, 06:19 am
Operational · 852ms · 5 Apr, 06:00 am
Operational · 758ms · 5 Apr, 05:49 am
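The summary stats above can be recomputed from raw check data. A sketch over just the 15 latencies listed here, using the nearest-rank p95 convention (the dashboard's figures cover larger windows of 60 and 150 checks, so the numbers will differ):

```python
from statistics import mean

# Latencies (ms) from the 15 recent checks listed above.
latencies = [668, 685, 744, 803, 724, 707, 787, 680,
             985, 691, 771, 635, 545, 852, 758]

def p95(samples):
    """Nearest-rank 95th percentile (one common convention)."""
    ordered = sorted(samples)
    rank = max(1, round(0.95 * len(ordered)))
    return ordered[rank - 1]

print(f"min={min(latencies)}ms avg={mean(latencies):.0f}ms "
      f"p95={p95(latencies)}ms max={max(latencies)}ms")
```

Over this short window the p95 is dominated by the single 985ms outlier's neighbors, which is why percentile stats are usually reported over longer windows.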